Groovy inject without a similar Java 8 reduce

Normally I think of Groovy's inject method as equivalent to Java 8's reduce, but I seem to have hit an unusual situation.
Say I have a POJO (or POGO) called Book
class Book {
    int id
    String name
}
If I have a collection of books and want to convert them to a map where the keys are the ids and the values are the books, then in Groovy it's easy enough to write:
Map bookMap = books.inject([:]) { map, b ->
    map[b.id] = b
    map
}
i.e., for each book, add it to the map under the book's id and return the map.
In Java 8, the same operation would take a completely different approach. Either this:
Map<Integer, Book> bookMap = books.stream()
    .collect(Collectors.toMap(Book::getId, b -> b));
or, equivalently,
bookMap = books.stream()
    .collect(Collectors.toMap(Book::getId, Function.identity()));
the difference being a matter of style.
What I'm wondering, however, is if there is a reduce operation in Java 8 that would be similar to the inject from Groovy. I can't just mimic what I did in Groovy, because in Java 8 the signature for reduce is:
T reduce(T identity, BinaryOperator<T> accumulator)
The BinaryOperator means that both arguments of the lambda expression must be of the same type. If it were a BiFunction, I could make the lambda's first argument a HashMap<Integer, Book> and the second argument a Book, but I can't do that with a BinaryOperator. I know there's a three-argument version of reduce, but that doesn't seem to help either.
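For reference, the three-argument overload is declared as:
<U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner)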
Am I missing something obvious? Is it just that inject is more general than reduce? Since I already have an idiomatic way of solving the problem in Java, this isn't critical, but I was struck by the differences here.

Yo Ken! :-D
You need the 3 parameter form of reduce, so given:
List<Book> books = Arrays.asList(
    new Book(1, "Book One"),
    new Book(2, "Tim's memoirs"),
    new Book(3, "Harry Potter and the sarcastic cat")
);
You can do:
Map<Integer, Book> reduce = books.stream().reduce(
    new HashMap<Integer, Book>(),
    (map, value) -> {
        map.put(value.id, value);
        return map;
    },
    (a, b) -> {
        a.putAll(b);
        return a;
    }
);
To give:
{
    1=Book{id=1, name='Book One'},
    2=Book{id=2, name='Tim's memoirs'},
    3=Book{id=3, name='Harry Potter and the sarcastic cat'}
}
The first parameter is the thing to collect into:
new HashMap<Integer, Book>(),
The second parameter is a BiFunction that takes the current accumulator and the next element in the stream, and combines them somehow:
(map, value) -> {
    map.put(value.id, value);
    return map;
},
The third parameter in that reduce call, a BinaryOperator:
(a, b) -> {
    a.putAll(b);
    return a;
}
is how to join all the resulting maps back together, assuming you are running a parallel stream...
put and putAll returning void make it a fugly mess :-( But I guess chaining wasn't a popular thing back in the late 90s...
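As an aside: since put and putAll are void, Java 8's three-argument collect overload, which takes a Supplier and two void-returning BiConsumers, can be a cleaner fit than reduce for this kind of mutable accumulation. A minimal sketch in Groovy syntax, assuming the Book class and books list from the question, and using explicit closure coercion to the functional interfaces:
import java.util.function.BiConsumer
import java.util.function.Supplier

// Stream.collect(Supplier, BiConsumer, BiConsumer) accepts void-returning
// accumulator and combiner closures, so put/putAll work without returning the map.
Map<Integer, Book> bookMap = books.stream().collect(
    { new HashMap<Integer, Book>() } as Supplier,               // container to collect into
    { map, b -> map.put(b.id, b) } as BiConsumer,               // fold one Book into the map
    { target, source -> target.putAll(source) } as BiConsumer   // merge partial maps (parallel case)
)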

Related

Comparator vs Closure in min call in groovy?

I am trying to understand the difference between passing a Closure vs a Comparator to the min function on a collection:
// Example 1: Closure/field/attribute?
Sample min = container.min { it.timespan.start }

// Example 2: Comparator
Sample min2 = container.min(new Comparator<Sample>() {
    @Override
    int compare(Sample o1, Sample o2) {
        return o1.timespan.start <=> o2.timespan.start
    }
})
They both return the correct result.
Where:
class Sample {
    TimeSpan timespan
    static constraints = {
    }
}
And:
class TimeSpan {
    LocalDate start
    LocalDate end
}
In Example 1 I just pass the field timespan.start to min, which I guess means that I am passing a Closure (even though it's just a field in a class)?
In Example 1, does Groovy convert the field timespan.start into a Comparator behind the scenes, like the one I create explicitly in Example 2?
The difference is that these are two different min methods, each taking different arguments: there is one for passing a Closure and one for passing a Comparator (there is also a third one using identity, plus some deprecated ones, but we can ignore those for now).
In the first version (a Closure with one, implicit, argument) you extract from each passed element the value you want min to aggregate on. This version therefore has some inner workings to deal with comparing those values.
But the docs also state:
If the closure has two parameters it is used like a traditional
Comparator. I.e. it should compare its two parameters for order,
returning a negative integer, zero, or a positive integer when the
first parameter is less than, equal to, or greater than the second
respectively. Otherwise, the Closure is assumed to take a single
parameter and return a Comparable (typically an Integer) which is then
used for further comparison.
So you can also use the Closure version to do the same as your second example (you just have to declare the two parameters explicitly):
container.min{ a, b -> a <=> b }
And there is also a shorter version of your second example. In Groovy you can coerce a Closure to an interface, so this works too:
container.min({ a, b -> a <=> b } as Comparator)
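Applied to the question's timespan.start field, a sketch of those two-parameter forms would look like this:
// two-parameter Closure used like a Comparator
Sample min3 = container.min { a, b -> a.timespan.start <=> b.timespan.start }

// or the same Closure coerced to a Comparator
Sample min4 = container.min({ a, b -> a.timespan.start <=> b.timespan.start } as Comparator)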

What is the way to implement Observable other than Control.Lens.Setter in Haskell?

Control.Lens.Setter is a remarkable way to get an Observable-like feature in Haskell (a function/functor that is triggered when a value in the data set is updated).
However, since lens is not included in the standard environment and requires an extra installation, how can I get just a primitive setter feature without lens, for example with a field label:
data Foo = Foo {val :: Int}
How can I do this?
Does the ST monad fit this purpose?
Thanks.
It's not very clear what you're looking for here, but if you just want to update a record field, you can use the Haskell record update syntax:
x = Foo { val = 5 }
y = x { val = 42 }
This works for any record, with any number of fields, and you don't need to list all the fields, only the ones you'd like to update, for example:
data D = D { a :: String, b :: Int }
x = D { a = "foo", b = 42 }
y = x { a = "bar" } -- now y = D { a = "bar", b = 42 }
z = x { b = 43 } -- now z = D { a = "foo", b = 43 }
Keep in mind that this doesn't actually update ("change", "mutate") the values in memory, but rather creates copies of the records with all the fields equal to those of the original record, except the updated fields. Lenses work the same way, and in fact everything in Haskell does, since Haskell doesn't allow mutation at all.
I think that you're conflating a couple of concepts here; specifically, setter vs. observable.
I don't know much about them, but an observable does seem to be, as you described, a way to run a function every time a value is changed. This concept does not exist in Haskell; I'm sure you could make it somehow, but it wouldn't be very useful, since (almost) every Haskell value is immutable.
By contrast, a setter is just a function which changes part of a value, the same as with Java getters and setters. The only slight complication is that in Haskell, all values are immutable (as I mentioned above), so instead of changing the value, a setter copies the value across but with the thing set. For instance, if you have a setter setName (for a record), and you invoke it with something like setName "foo" oldRecord, this will keep oldRecord as is, but return a new record with the name set to "foo". As you already know, there are other, more complex, implementations of setters, such as lenses.
You also mention ST. This is a more advanced concept; it's basically used when you have to use some sort of mutable variable locally but still retain purity (which is not very often).
Now to answer your other question: How do I use setters without lenses? Well, if you have a record like data Foo = Foo {val :: Int, val2 :: String} (your example + an extra field), and you have an old record - let's say oldRecord = Foo { val = 1, val2 = "test" }, Haskell has special setter syntax: doing oldRecord { val = 2 } will give a new record with val set to 2 - that is, it will give Foo { val = 2, val2 = "test" }. Obviously this syntax is a bit clumsy though for nested records, which is the reason lenses were invented.

Null-safe dereferencing operator for older compilers?

I have an old compiler on the server (VS 2010), which, obviously, can't compile instructions like this:
var result = a?.b()?.c?.d;
Is there an alternative I can use? Is it possible to do this through an expression tree? For example, like this:
var result = NullSafe(()=> a.b().c.d);
There were quite a few attempts to do this before it became a language feature. It's a bit hard to find the references now, but you can get an idea of how it can be done and why it's not that easy.
This snippet for example looks simple:
public static R NullSafe<T, R>(this T obj, Func<T, R> f) where T : class
{
    return obj != null ? f(obj) : default(R);
}
You can use it almost like an operator:
deliveryCode = order.NullSafe(o => o.DeliveryCompany).NullSafe(dc => dc.FileArtworkCode);
But it doesn't work with value types. This older snippet uses EqualityComparer:
public static TOut NullSafe<TIn, TOut>(this TIn obj, Func<TIn, TOut> memberAction)
{
    // Note: we don't use obj != null because that doesn't work for value types, and the
    // compiler would have to lift the type to a nullable type to do the comparison with null.
    return EqualityComparer<TIn>.Default.Equals(obj, default(TIn))
        ? default(TOut)
        : memberAction(obj);
}
It will take a bit of digging to find more complete examples. I remember trying methods similar to these way back when until I found a more complete one.
This SO answer to a similar question does away with chaining and allows one to write:
foo.PropagateNulls(x => x.ExtensionMethod().Property.Field.Method());
The implementation is a bit involved though, to say the least.

Problems overloading Groovy comparison operators

I am building an analytic application using Groovy and require very forgiving math operators regardless of data format. I achieve this through operator overloading, in many cases improving (for my purposes) on Groovy's default type flexibility. As an example, I need 123.45f + "05" to equal 128.45f. By default Groovy downgrades to String and I get 123.4505.
In most cases my overloading works very well, but not for comparison operators. I've followed a couple of discussions on this, but I'm not getting to an answer and I'm looking for ideas. I recognize that the goal is to overload compareTo() (vs. something like lessThan), but Groovy seems to ignore this and instead attempts its own smart comparison - e.g. DefaultTypeTransformation.compareTo(Object left, Object right), which fails on mixed types.
Unfortunately this is a must have for me, because improperly comparing two values compromises the whole solution and I don't have control over some of the data types being analyzed (e.g. vendor data structures).
For example, I need the following to work:
Float f = 123.45f;
String s = "0300";
Assert.assertTrue( f < s );
I have many permutations of these, but my attempt to overload includes (let's just assume my JavaTypeUtil does what I need if I can get Groovy to call it):
// overloads on startup, trying to catch all cases
Float.metaClass.compareTo  = { Object o -> JavaTypeUtil.compareTo(delegate, o) }
Float.metaClass.compareTo  = { String s -> JavaTypeUtil.compareTo(delegate, s) }
Object.metaClass.compareTo = { String s -> JavaTypeUtil.compareTo(delegate, s) }
Object.metaClass.compareTo = { Object o -> JavaTypeUtil.compareTo(delegate, o) }
When I try the above test, none of these are called and instead I get:
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Float
    at java.lang.Float.compareTo(Float.java:50)
    at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.compareToWithEqualityCheck(DefaultTypeTransformation.java:585)
    at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.compareTo(DefaultTypeTransformation.java:540)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.compareTo(ScriptBytecodeAdapter.java:690)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.compareLessThan(ScriptBytecodeAdapter.java:710)
    at com.modelshop.datacore.generator.GroovyMathTests.testMath(GroovyMathTests.groovy:32)
Debugging through, I see that the < operator goes right to ScriptBytecodeAdapter.compareLessThan(), and the implementation of that seems to ignore the overloaded compareTo() here, in DefaultTypeTransformation.java:584 (Groovy 2.4.3):
if (!equalityCheckOnly || left.getClass().isAssignableFrom(right.getClass())
        || (right.getClass() != Object.class && right.getClass().isAssignableFrom(left.getClass())) //GROOVY-4046
        || (left instanceof GString && right instanceof String)) {
    Comparable comparable = (Comparable) left;
    return comparable.compareTo(right); // <--- ***
}
In a desperate attempt, I've also tried to overload compareLessThan, but I'm grasping at straws now; I don't know that there's any way to jump in front of the < mapping in Groovy.
Float.metaClass.compareLessThan << { Object right -> JavaTypeUtil.compareTo(delegate, right) < 0 }
Float.metaClass.compareLessThan << { String right -> JavaTypeUtil.compareTo(delegate, right) < 0 }
Any thoughts on a work-around? Thanks!
Part of it is that you need to include static, like this:
Float.metaClass.static.compareTo = { String s -> 0 }
This makes f.compareTo(s) work, but the < operator still won't work. This is a known limitation. The only operators that can be overloaded are the ones mentioned in the documentation. Possibly you could write a custom AST transformation to change all those operators into compareTo() calls.
But this isn't the whole story. f <=> s also doesn't work, despite <=> delegating to compareTo(). I believe this is because Float doesn't implement Comparable<Object> or Comparable<String>, only Comparable<Float>. Although I'm not sure where exactly in the chain Groovy makes the decision not to use that method, you can see it's not limited to Groovy's math classes. This also doesn't work
Foo.metaClass.compareTo = { String s -> 99 }
new Foo() <=> ''

class Foo implements Comparable<Foo> {
    int compareTo(Foo o) {
        0
    }
}
I think Groovy is doing some pre-parsing validation that's preventing the metaclass stuff from working. Whatever validation it's doing definitely inspects the interfaces implemented, because this fails for a different reason
Foo.metaClass.compareTo = { String s -> 99 }
new Foo() <=> ''

class Foo {
    int compareTo(Foo o) {
        0
    }
}
In both of these examples, replacing <=> with compareTo() works.
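For example, a small sketch building on the Comparable<Foo> variant above, showing the explicit call reaching the metaclass method:
class Foo implements Comparable<Foo> {
    int compareTo(Foo o) { 0 }
}

Foo.metaClass.compareTo = { String s -> 99 }

// the explicit method call is dispatched through the metaclass and finds the String overload,
// whereas `new Foo() <=> ''` still goes through ScriptBytecodeAdapter.compareTo and fails
assert new Foo().compareTo('') == 99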
This question has been asked a couple times before, but I haven't seen a good explanation for why. You might try asking on the user mailing list. I'm sure Jochen or Cedric would be able to explain why.
I guess the point is that your compareTo closure is expecting an Object; so when you invoke compareTo with a String, your closure doesn't get called at all.
I can only think of the following: being precise when specifying the closure's input parameter type:
Float.metaClass.compareTo = { Integer n -> aStaticHelperMethod(n) }
Float.metaClass.compareTo = { String s -> aStaticHelperMethod(s) }
Float.metaClass.compareTo = { SomeOtherType o -> aStaticHelperMethod(o) }

Groovy named and default arguments

Groovy supports both default and named arguments. I just don't see them working together.
I need some classes to support construction using simple positional (non-named) arguments as well as named arguments, like below:
def a1 = new A(2)
def a2 = new A(a: 200, b: "non default")
class A extends SomeBase {
    def props

    A(a = 1, b = "str") {
        _init(a, b)
    }

    A(args) {
        // use the values in the args map:
        _init(args.a, args.b)
        props = args
    }

    private _init(a, b) {
    }
}
Is it generally good practice to support both at the same time? Is the above code the only way to do it?
The given code will cause some problems. In particular, it'll generate two constructors with a single Object parameter. The first constructor generates bytecode equivalent to:
A() // a,b both default
A(Object) // a set, b default
A(Object, Object) // pass in both
The second generates this:
A(Object) // accepts any object
You can get around this problem by adding some types. Even though Groovy has dynamic typing, the type declarations in methods and constructors still matter. For example:
A(int a = 1, String b = "str") { ... }
A(Map args) { ... }
As for good practices, I'd simply use the groovy.transform.Canonical or groovy.transform.TupleConstructor annotation. They will provide correct property map and positional parameter constructors automatically. TupleConstructor provides the constructors only; Canonical applies some other best practices with regard to equals, hashCode, and toString.
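A minimal sketch, assuming typed properties so the generated single-argument constructor cannot swallow the named-argument map:
import groovy.transform.TupleConstructor

@TupleConstructor
class A {
    int a = 1
    String b = "str"
}

def a1 = new A(2)                         // positional: A(int), b falls back to its default
def a2 = new A(a: 200, b: "non default")  // named: no-arg constructor plus property setters
assert a1.a == 2 && a1.b == "str"
assert a2.a == 200 && a2.b == "non default"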
