What is the difference between passing a Map and using `body.resolveStrategy = Closure.DELEGATE_FIRST` - groovy

These two examples of encapsulated pipelines get their pipelineParams in two different ways, but it is not readily clear why one is preferable to the other.
What are the ramifications of using
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()
    pipeline {
        echo pipelineParams.name
    }
}
vs using
def call(Map pipelineParams) {
    pipeline {
        echo pipelineParams.name
    }
}
Example code from https://jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/

The difference is that in the first case, using the pipeline looks like declarative configuration. It's the so-called builder pattern in DSL terms:
myDeliveryPipeline {
    branch = 'master'
    scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
    ...
}
Whereas in the second case, applying a pipeline looks like imperative code, i.e. it's a regular function call:
myDeliveryPipeline(branch: 'master', scmUrl: 'ssh://git@myScmServer.com/repos/myRepo.git', ...)
There is also an explanation in the official Jenkins doc:
There is also a “builder pattern” trick using Groovy’s Closure.DELEGATE_FIRST, which permits Jenkinsfile to look slightly more like a configuration file than a program, but this is more complex and error-prone and is not recommended.
Personally, I wouldn't say I recommend against the DSL approach. The doc advises against it only because it's a bit more complex and can be error-prone.
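To see why the builder style works at all, here is a minimal plain-Groovy sketch of the mechanism (the return value and the assert are added purely for illustration; they are not part of the shared-library example above):

def myDeliveryPipeline(Closure body) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams       // property writes inside the closure land in the map
    body()
    return pipelineParams
}

def params = myDeliveryPipeline {
    branch = 'master'                    // becomes pipelineParams.branch via the delegate
}
assert params.branch == 'master'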

Related

Reactive streams map operator never getting executed

For the code below:
Mono<String> input =
    Mono.just("input")
        .map {
            println "inside map"
            it + "added"
        }
        .transform {
            Mono.just("hello")
        }
input.subscribe { println it }
The console output looks like this:
16:11:49.056 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
hello
The code inside the map function was never executed. I understand that the transform method executes at assembly time rather than at subscription time.
Why did Reactor decide not to process my upstream map operator? Did it intelligently decide that, since I am not in any way referring to the output of the map operator, it need not execute map at all?
Is this behaviour configurable?
The reason is that transform does not automatically subscribe to your original Mono. It's your responsibility to chain your logic onto it. Since nothing subscribes to it, it will never get triggered.
As the example you posted is a dummy one, it's difficult to say what the right thing to do would be. It depends on your use case.
A few things you can do, though:
Get rid of transform and simply use the then operator:
Mono<String> input =
    Mono.just("input")
        .map {
            println "inside map"
            it + "added"
        }
        .then(Mono.just("hello"))
If for some reason you need transform, then chain your logic onto your original Mono:
Mono<String> input =
    Mono.just("input")
        .map {
            println "inside map"
            it + "added"
        }
        .transform {
            it.then(Mono.just("hello"))
        }
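With either variant the map operator now has a downstream subscriber, so subscribing as in the question should print both lines:

input.subscribe { println it }
// inside map
// hello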

How to reuse a code block that describes similar Ant build logic in Groovy?

If we have build logic implemented with Groovy AntBuilder, like the code below:
ant.someTask(attr1:value1, attr2:value2) {
    configuration1(param1:args1, param2:args2) {
        similarStructure(additionalArgs:aaa) {
            setting1(param5:value5) {
                //...blah blah blah
            }
            //further settings, which may or may not be the same as in the similarStructure below
        }
    }
    configuration2(param3:args3, param4:args4) {
        similarStructure(additionalArgs:aaa) {
            setting1(param5:value5) {
                //...blah blah blah
            }
            //further settings, which may or may not be the same as in the similarStructure below
        }
    }
}
Is there any way to reuse the Groovy AntBuilder code block, so that the statement in configuration2 can be shortened?
I've tried to predefine closures and inject them into both configurations, but that fails with a "property not found" exception while initializing the closure.
I'll provide two answers so you can select which one is more appropriate for your use case and test it. The two solutions differ in the level at which the config is shared.
If you want a more general purpose solution that allows you to share the whole of the similarStructure block, you need to perform some more advanced work. The trick is to ensure that the delegate of the shared configuration closure is set appropriately:
def sharedConfig = {
    similarStructure(additionalArgs:aaa) {
        setting1(param5:value5) {
            //...blah blah blah
        }
    }
}
ant.someTask(attr1: value1, attr2: value2) {
    configuration1(param1:args1, param2:args2) {
        applySharedConfig(delegate, sharedConfig)
    }
    configuration2(param3:args3, param4:args4) {
        applySharedConfig(delegate, sharedConfig)
    }
}
void applySharedConfig(builder, config) {
    def c = config.clone()
    c.resolveStrategy = Closure.DELEGATE_FIRST
    c.delegate = builder
    c.call()
}
Although the applySharedConfig() method seems ugly, it can be used to share multiple configurations across different tasks.
One thing to bear in mind with this solution is that the resolveStrategy of the closure can be very important. I think both DELEGATE_FIRST and OWNER_FIRST (the default) will work fine here. If you run into what appear to be name resolution problems (missing methods or properties) you should try switching the resolution strategy.
I'll provide two answers so you can select which one is more appropriate for your use case and test it. The two solutions differ in the level at which the config is shared.
If you are happy to simply share the closure that goes with similarStructure, then the solution is straightforward:
def sharedConfig = {
    setting1(param5:value5) {
        //...blah blah blah
    }
}
ant.someTask(attr1: value1, attr2: value2) {
    configuration1(param1:args1, param2:args2) {
        similarStructure(additionalArgs:aaa, sharedConfig)
    }
    configuration2(param3:args3, param4:args4) {
        similarStructure(additionalArgs:aaa, sharedConfig)
    }
}
The similarStructure method should ensure that the sharedConfig closure is properly configured. I haven't tested this, so I'm not entirely sure. The disadvantage of this approach is that you have to duplicate the similarStructure call and its arguments.
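If that duplication bothers you, an untested variation on the same idea is to wrap the repeated call itself in a closure and pass the current builder delegate into it (sharedSimilarStructure is a name introduced here for illustration, not from the original question):

def sharedSimilarStructure = { builder ->
    builder.similarStructure(additionalArgs:aaa, sharedConfig)
}
ant.someTask(attr1: value1, attr2: value2) {
    configuration1(param1:args1, param2:args2) {
        sharedSimilarStructure(delegate)
    }
    configuration2(param3:args3, param4:args4) {
        sharedSimilarStructure(delegate)
    }
}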

Workaround for lack of generators/yield keyword in Groovy

Wondering if there is a way I can use sql.eachRow like a generator, to use it in a DSL context where a Collection or Iterator is expected. The use case I'm going for is streaming JSON generation - what I'm trying to do is something like:
def generator = { sql.eachRow { yield it } }
jsonBuilder.root {
    status "OK"
    rows generator()
}
You would need continuation support (or something similar) for this to work to some extent. Groovy does not have continuations, and neither does the JVM. Normally continuation-passing style would work, but then the eachRow method would have to support it, which it of course does not. So the only way I see is a makeshift solution using threads or something like that. Maybe something like this would work for you:
def sync = new java.util.concurrent.SynchronousQueue()
Thread.start { sql.eachRow { sync.put(it) } }
jsonBuilder.root {
    status "OK"
    rows sync.take()
}
I am not claiming that this is a good solution, just a quick producer-consumer workaround for your problem.
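Note that sync.take() as written hands only a single row to the builder. Below is a sketch of the same thread-based idea that drains every row, using a poison-pill marker to signal the end (queue, DONE and allRows are names introduced here; sql and jsonBuilder are the same schematic objects as above). It materializes the rows into a list, so it sidesteps rather than solves the streaming aspect:

import java.util.concurrent.LinkedBlockingQueue

def queue = new LinkedBlockingQueue()
def DONE = new Object()                            // poison pill marking the end of the rows

Thread.start {
    sql.eachRow { queue.put(it.toRowResult()) }    // copy each row; the result-set proxy is not valid later
    queue.put(DONE)
}

def allRows = []
for (def row = queue.take(); !row.is(DONE); row = queue.take()) {
    allRows << row                                 // consumer drains until the marker arrives
}
jsonBuilder.root {
    status "OK"
    rows allRows
}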

Gradle - Can I manage task dependencies using convention properties?

I'm using Gradle as a build system for my project.
What I want is to make task A depend on task B if a given property is set to "true". Is this doable, and if the answer is yes, how can I do that?
Currently, I'm using conventionMapping but this doesn't seem to work. My code looks like this:
MyTask.conventionMapping.propertyName = { MyPluginConvention.propertyName }
if (MyTask.propertyName.equals("true")) {
    MyTask.dependsOn ...
}
Thanks in advance,
Marin
Instead of working with task/convention classes, you'll have to work with their instances. Also, you'll have to defer the decision whether to add a task dependency. For example:
def myTask = project.tasks.create("myTask", MyTask)
def otherTask = ...
def myConvention = new MyConvention()
...
myTask.conventionMapping.propertyName = { myConvention.propertyName }
// defer decision whether to depend on 'otherTask'
myTask.dependsOn { myTask.propertyName == "true" ? otherTask : [] }
If there's no task variable in scope, you can also reference existing tasks via project.myTask or project.tasks["myTask"].
PS: Convention objects have been largely replaced by extension objects.
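For completeness, here is a rough sketch of the same deferred-dependency idea with an extension object instead of a convention object (MyExtension, myExt and otherTask are illustrative names, not from the original question):

class MyExtension {
    String propertyName
}

def ext = project.extensions.create("myExt", MyExtension)
def myTask = project.tasks.create("myTask", MyTask)
def otherTask = project.tasks.create("otherTask")

// the closure passed to dependsOn is evaluated lazily, when the task graph is
// built, so whatever value the build script assigns to myExt.propertyName is used
myTask.dependsOn { ext.propertyName == "true" ? otherTask : [] }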

Why missingMethod is not working for Closure?

UPDATE
I have to apologize for confusing the readers. After I got totally lost in the code, I reverted all my changes from the Mercurial repo, carefully applied the same logic as before -- and it worked. The answers below helped me understand the (new to me) concept better, and for that I gave them upvotes.
Bottom line: if a call to a missing method happens within a closure, and the resolve strategy is set to DELEGATE_FIRST, methodMissing() will be called on the delegate. If it isn't -- check your own code, there is a typo somewhere.
Thanks a lot!
Edit:
OK, now that you've clarified what you are doing (somewhat ;--))
Another approach (one that I use for DSLs) is to parse your closure group into a map via a ClosureToMap utility like this:
// converts given closure to map of method => value pairs (1-d, if you need nested, ask)
class ClosureToMap {
    Map map = [:]
    ClosureToMap(Closure c) {
        c.delegate = this
        c.resolveStrategy = Closure.DELEGATE_FIRST
        c.each { "$it"() }
    }
    def methodMissing(String name, args) {
        if (!args.size()) return
        map[name] = args[0]
    }
    def propertyMissing(String name) { name }
}
// Pass your closure to the utility and access the generated map
Map map = new ClosureToMap(your-closure-here)?.map
Now you can iterate through the map, perhaps adding methods to the applicable metaClass (MCL) instance. For example, some of my domains have dynamic finders like:
def finders = {
    userStatusPaid = { Boolean active = true ->
        eq {
            active "$active"
            paid true
        }
    }
}
I create a map using the ClosureToMap utility and then iterate through it, adding map keys (methods, like "userStatusPaid") and values (in this case, the closure calling "eq") to the domain instance's metaClass, delegating the closure to our ORM, like so:
def injectFinders(Object instance) {
    if (instance.hasProperty('finders')) {
        Map m = new ClosureToMap(instance.finders).map
        m?.each { String method, Closure cl ->
            cl.delegate = instance.orm
            cl.resolveStrategy = Closure.DELEGATE_FIRST
            instance.orm.metaClass."$method" = cl
        }
    }
}
In this way in controller scope I can do, say:
def actives = Orders.userStatusPaid()
and "eq" closure will delegate to the ORM and not domain Orders where an MME would occur.
Play around with it, hopefully I've given you some ideas for how to solve the problem. In Groovy, if you can't do it one way, try another ;--)
Good luck!
Original:
Your methodMissing is defined on the String metaclass; in order for it to be invoked, you need "someString".foo()
If you simply call foo() by itself within your closure it will fail, regardless of the delegation strategy used; i.e. if you don't use the (String) delegate, good luck. Case in point: do "".foo() and it works.
I don't fully understand the issue either: why would you not have access to the closure's delegate? You are setting the closure's delegate and invoking the closure, which means you have access to the delegate within the closure itself (and can just call delegate.foo())
No, you will not catch a missing method and redirect it to the delegate with metaclass magic.
The closure delegate is your chance to capture those calls and adapt them to the backing domain.
That means:
you should create your own delegate with the methods required by the DSL.
Do not try to force a class to do delegate work if it's not designed for the task, or the code will get really messy in no time.
Keep everything DSL-related in a set of specially designed delegate classes and everything will suddenly become ridiculously simple and clear.
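To illustrate that last point, here is a minimal sketch (DslDelegate and runDsl are names made up for this example) of a purpose-built delegate whose methodMissing captures the unknown calls made inside a DSL closure, attached with DELEGATE_FIRST so the delegate is consulted before the owner:

class DslDelegate {
    Map collected = [:]
    def methodMissing(String name, args) {
        collected[name] = args ? args[0] : null    // record each DSL call and its first argument
    }
}

def runDsl(Closure body) {
    def dsl = new DslDelegate()
    def c = body.clone()
    c.delegate = dsl
    c.resolveStrategy = Closure.DELEGATE_FIRST     // delegate's methodMissing wins over the owner
    c()
    dsl.collected
}

assert runDsl { greeting "hello"; retries 3 } == [greeting: "hello", retries: 3]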
