Special Groovy magic re property access and collections / iterables?

I understand what is happening here with the spread operator *. in Groovy (2.4.3):
[].class.methods*.name
=> [add, add, remove, remove, get, ...
But why does leaving the * out produce the same result?
[].class.methods.name
=> [add, add, remove, remove, get, ...
I'd have expected that to be interpreted as accessing the name property of the java.lang.reflect.Method[] returned by methods, and so be an error. But it seems to work. Having experimented a bit more, the following also work:
[*[].class.methods].name
=> [add, add, remove, remove, get, ...
([].class.methods.toList()).name
=> [add, add, remove, remove, get, ...
So it appears that attempting to access a property of an array or list (perhaps even an Iterable) actually returns a list of that property for each element of the collection (as the spread operator would).
So this leaves me wondering:
Is this behaviour documented anywhere? (I don't see it here for example: http://www.groovy-lang.org/operators.html and haven't seen it noted elsewhere in the docs.)
Does this behaviour only apply to 'properties' (i.e. no-arg methods following the getFoo() naming convention)? This seems to be the case from some quick GroovyConsole tests (one such check is sketched after these questions).
Is the spread operator therefore only necessary/useful when calling non-getFoo() style methods or methods with arguments? (Since you can just use . otherwise.)
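For reference, here is a minimal sketch of the kind of quick GroovyConsole check meant above (my own illustration; toGenericString() is just an arbitrary non-getter method on Method, and the exact exception may vary by Groovy version):
def methods = [].class.methods           // java.lang.reflect.Method[]

// Property access spreads over the elements, just like the spread operator:
assert methods.name == methods*.name

// A plain method call does not spread; it is dispatched against the array itself:
try {
    methods.toGenericString()
    assert false, 'expected this to fail'
} catch (MissingMethodException expected) {
    // no such method on the array, only on each Method element
}

// For methods (rather than properties) the spread operator is needed:
assert methods*.toGenericString() instanceof List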
UPDATE:
It appears to be the case that spread *. works for any Iterable whereas the . only applies to collections. For example:
class Foo implements Iterable {
    public Iterator iterator() { [Long.class, String.class].iterator() }
}
(new Foo())*.name
=> [java.lang.Long, java.lang.String]
(new Foo()).name
=> groovy.lang.MissingPropertyException: No such property: name for class: Foo
(I guess this is a good thing: if the Iterable itself later gained a property with the same name, the code would start returning that (single) property from the Iterable, rather than the list of property values from the elements.)

That's the GPath expression behaviour, documented (ish) here, and yes, it only works for properties. (There's an old blog post from 2008 by Ted Naleid here, digging into how it works.)
For methods, you need to use *. or .collect()
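For example (a small sketch based on the reflection example from the question; getName() is the getter behind the name property):
def methods = [].class.methods

// spread-dot calls the method on each element...
assert methods.name == methods*.getName()

// ...and collect does the same thing with an explicit closure
assert methods.name == methods.collect { it.name }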
See also: Groovy spread-dot operator
Better docs link (as pointed out by @NathanHughes below).

Related

Difference between different "resolve" functions in a custom NSMergePolicy

When implementing a custom NSMergePolicy, there are three functions available to override:
final class MyMergePolicy: NSMergePolicy {
    override func resolve(mergeConflicts list: [Any]) throws {
        // ...
        try super.resolve(mergeConflicts: list)
    }

    override func resolve(optimisticLockingConflicts list: [NSMergeConflict]) throws {
        // ...
        try super.resolve(optimisticLockingConflicts: list)
    }

    override func resolve(constraintConflicts list: [NSConstraintConflict]) throws {
        // ...
        try super.resolve(constraintConflicts: list)
    }
}
The documentation for all three is exactly the same; it says "Resolves the conflicts in a given list.", and I can't seem to find much information online.
What's the difference between these functions? What are the appropriate use cases for each of them?
The documentation kind of sucks here but you can get a partial explanation by looking at the arguments the functions receive.
resolve(optimisticLockingConflicts list: [NSMergeConflict]): Gets a list of one or more NSMergeConflict. This is what you'll usually hear about as a merge conflict, when the same underlying instance is modified on more than one managed object context.
resolve(constraintConflicts list: [NSConstraintConflict]): Gets a list of one or more NSConstraintConflict. This happens if you have uniqueness constraints on an entity but you try to insert an instance with a duplicate value.
The odd one out is resolve(mergeConflicts list: [Any]). This one is basically a leftover from the days before uniqueness constraints existed. It gets called for both types of conflict described above-- but only if you don't implement the more-specific function. So for example if you have a constraint conflict, resolve(constraintConflicts:...) gets called if you implemented it. If you didn't implement it, the context tries to fall back on resolve(mergeConflicts list: [Any]) instead. The same process applies for merge conflicts-- the context uses one function if it exists, and can fall back on the other. Don't implement this function, use one of the other two.
For both conflict types, the arguments give you details on the conflict, including the objects with the conflict and the details of the conflict. You can resolve them however you like.

Tell AutoMapper that a field is a top-level member

I am trying to take some of the pain out of creating mapping expressions in AutoMapper, using AutoMapper.QueryableExtensions.
I have the following, which gives a critical performance gain:
private MapperConfiguration CreateConfiguration() {
    return new MapperConfiguration(cfg => cfg.CreateMap<Widget, WidgetNameDto>()
        .ForMember(dto => dto.Name,
            conf => conf.MapFrom(w => w.Name)));
}
To understand the performance gain, see here: https://github.com/AutoMapper/AutoMapper/blob/master/docs/Queryable-Extensions.md The key is that the query is limited by field at the database level.
It's terrific that this works. But I anticipate needing to do a lot of this kind of projecting. I am trying to take some of the pain out of the syntax in the ForMember clause above.
For example, I've tried this:
public static IMappingExpression<TFrom, TTo> AddProjection<TFrom, TTo, TField>(
    this IMappingExpression<TFrom, TTo> expression,
    Func<TFrom, TField> from,
    Func<TTo, TField> to
)
    => expression.ForMember(t => to(t), conf => conf.MapFrom(f => from(f)));
The problem is that everything I do runs into an error:
AutoMapper.AutoMapperConfigurationException : Custom configuration for members is only supported for top-level individual members on a type.
Even if the passed-in Funcs are top-level individual members, that fact is lost in the passing, so I hit the error. I've also tried changing Func<Whatever> to Expression<Func<Whatever>>. It doesn't help.
Is there any way I can simplify the syntax of the ForMember clause? Ideally, I would just pass in the two relevant fields.
First, there is no need to add a mapping for fields/properties that match by name: AutoMapper maps them automatically by convention (that's why it's called a convention-based object-object mapper). And for including just some of the properties in the projection, you can use the Explicit expansion feature.
Second, what you call pain in the ForMember syntax is in fact flexibility. For instance, explicit expansion and other behaviors can be controlled by the conf argument, so it's not only for specifying the source.
With that being said, what you ask is possible. You have to change the from/to parameter types to Expression:
Expression<Func<TFrom, TField>> from,
Expression<Func<TTo, TField>> to
and the implementation is then simply:
=> expression.ForMember(to, conf => conf.MapFrom(from));

How to define and call a function in Jenkinsfile?

I've seen a bunch of questions related to this subject, but none of them offers anything that would be an acceptable solution (please, no loading external Groovy scripts, no calls to the sh step, etc.).
The operation I need to perform is a one-liner, but pipeline limitations made it impossible to write anything useful in that unter-language...
So, here's a minimal example:
@NonCPS
def encodeProperties(Map properties) {
    properties.collect { k, v -> "$k=$v" }.join('|')
}

node('dockerized') {
    stage('Whatever') {
        properties = [foo: 123, bar: "foo"]
        echo encodeProperties(properties)
    }
}
Depending on whether I add or remove the @NonCPS annotation, or the type declaration of the argument, the error changes, but it never gives any reason for what happened. It's basically random noise that contradicts the reality of the situation (at times it claims that some irrelevant object doesn't have a method encodeProperties, other times it says that it cannot find a method encodeProperties with a signature that nobody was trying to call it with, like two arguments instead of one, and so on).
From reading the documentation, which is of disastrous quality, I sort of understood that maybe functions in general aren't serializable, and that is why you need to explain this explicitly to the Groovy interpreter... I'm sorry, this makes no sense, but this is roughly what the documentation says.
Obviously, trying to use collect inside the stage creates a load of new errors... which are at least understandable, in that the authors confess that their version of Groovy doesn't implement most of the Groovy standard...
It's just a typo. You defined encodeProperties but called encodeProprties.

Does groovy ignore the type of null values in method signatures?

To illustrate, I created a little Spock test (but it's about Groovy itself, not Spock):
void "some spock test"() {
given: String value = null
expect: someMethod(value) == 3
}
int someMethod(String s) {
return 3
}
int someMethod(Map s) {
return 5
}
There are two methods whose signatures differ only in the type of the parameter. I thought that when I give it a null value that is explicitly typed as a String, the String method would be called.
But that doesn't happen; the test fails, because the Map method is called! Why?
I guess Groovy ignores the type and treats all nulls the same. There seems to be some kind of priority of types: when I use Object instead of Map as the parameter type of the "wrong" method, it's all the same, but when I use, for instance, Integer, the test succeeds.
But then again: if Groovy really ignores the type of nulls, why does the following fix the original test?
expect: someMethod((String) value) == 3
If you read my answer to the question Tim already mentioned, you will see that I talk there about runtime types. The static type normally plays no role in this. I also described there how the distance calculation is used, and that for null the distance to Object is used to determine the best fitting method.
What I did not mention is that you can force method selection by using a cast. Internally Groovy will use a wrapper for the object that also transports the type; the transported type is then used instead. But you surely understand that this means one additional object creation per method call, which is very inefficient. Thus it is not the standard. In the future Groovy may change to include that static type information, but this requires a change to the MOP as well, and that is difficult.
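To make that concrete, here is a small sketch (my own example, not from the original answer; pick is a made-up method):
String pick(String s) { 'String overload' }
String pick(Map m)    { 'Map overload' }

// The cast wraps the null together with the requested type,
// forcing selection of a specific overload:
assert pick((String) null) == 'String overload'
assert pick((Map) null)    == 'Map overload'

// Without a cast, the runtime distance calculation decides which overload
// receives the null; in the question above it picked the Map variant even
// though the variable was statically declared as String.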

Groovy difference between 'any' and 'find' methods

In Groovy, there are two methods, any and find, that can be used on Maps.
Both of these methods will "search" for the content we are interested in (that is, both any and find return whether the element is in the Map or not, so both need to search).
But within this search, how do they differ?
They actually do different things. find returns the actual element that was found, whereas any produces a boolean value. What makes this confusing for you is Groovy truth.
Any unset (null) value will resolve to false:
def x
assert !x
So if you are just checking for false, then the returned values from both methods will serve the same purpose, since essentially all objects have an implicit existential boolean value.
(!list.find{predicate}) is equivalent to (!list.any{predicate})
However:
(list.find{predicate}) is not equivalent to (list.any{predicate})
If any did not exist in the Groovy API and you wanted to add this feature to the List metaClass, an implementation could be:
java.util.List.metaClass.any = { Closure c ->
    return delegate.find(c) != null
}
find is more general than any.
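A quick sketch of the difference on a Map (my own example):
def m = [a: 1, b: 2, c: 3]

// find returns the first matching Map.Entry (or null if nothing matches)
def hit = m.find { it.value > 1 }
assert hit instanceof Map.Entry
assert hit.key == 'b' && hit.value == 2

// any returns a plain boolean
assert m.any { it.value > 1 }

// when nothing matches, find's null and any's false are both falsy,
// which is why the negated checks behave the same
assert m.find { it.value > 9 } == null
assert !m.any { it.value > 9 }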
