Is there any practical difference between the following two approaches to casting:
result.count = (int) response['hits']['total']
vs
result.count = response['hits']['total'] as int
I'm using @CompileStatic and the compiler wants me to do the cast, which got me wondering whether there is any performance or practical difference between the two notations.
The main difference is that a cast relies on the inheritance hierarchy to do the conversion, whereas the as operator invokes a custom converter that may or may not use inheritance.
Which one is faster?
It depends on the converter method implementation.
Casting
Well, all casting really means is taking an Object of one particular
type and “turning it into” another Object type. This process is called
casting a variable.
E.g.:
Object object = new Car();
Car car = (Car)object;
As you can see in the example, we cast an object of static type Object to Car because we know that, deep down, the object is an instance of Car.
But we can't do the following unless Car is a subclass of Bicycle, which would not make any sense (you will get a ClassCastException in this case):
Object object = new Car();
Bicycle bicycle = (Bicycle)object;
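To make the failure mode concrete, here is a minimal, runnable Java sketch using the same hypothetical Car/Bicycle types, guarding the downcast with instanceof so the invalid case is handled instead of throwing:

```java
// Minimal sketch using the hypothetical Car/Bicycle types from above.
class Car {}
class Bicycle {}

public class CastDemo {
    static String describe(Object object) {
        // Guard the downcast with instanceof to avoid a ClassCastException
        if (object instanceof Car) {
            Car car = (Car) object; // safe: we just checked the runtime type
            return "car";
        }
        return "not a car";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Car()));     // car
        System.out.println(describe(new Bicycle())); // not a car
    }
}
```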
as Operator
In Groovy we can override the method asType() to convert an object
into another type. We can use the method asType() in our code to
invoke the conversion, but we can even make it shorter and use as.
In Groovy, to use the as operator the left-hand operand must implement this method:
Object asType(Class clazz) {
    // conversion code here
}
As you can see, the method accepts an instance of Class and implements a custom converter, so you can convert Object to Car, or even Car to Bicycle; it all depends on your implementation.
Related
Assume I want the following value object to always contain a capitalized String value. Is it appropriate to do it like this, with toUpperCase() in the constructor?
class CapitalizedId(value: String) {
    val value: String = value.toUpperCase()
    // getters
    // equals and hashCode
}
In general, I do not see a problem with performing such a simple transformation in a value object's constructor. There should of course be no surprises for the user of a constructor, but as the name CapitalizedId already tells you that whatever is created will be capitalized, there is no surprise, from my point of view. I also perform validity checks in constructors to ensure business invariants are adhered to.
If you are worried about performing operations in a constructor, or if the operations and validations become too complex, you can always provide factory methods instead (in Kotlin via a companion object, I guess; I'm not a Kotlin expert) containing all the heavy lifting (think of LocalDateTime.of()) and validation logic, and use them somehow like this:
CapitalizedId.of("abc5464g");
Note: when implementing a factory method, the constructor should be made private in such cases.
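The factory-method variant can be sketched in plain Java (the class and method names mirror the question; the null check is an added invariant, not part of the original):

```java
import java.util.Locale;
import java.util.Objects;

// Sketch of the CapitalizedId value object with a private constructor
// and a static factory method that does the heavy lifting.
public final class CapitalizedId {
    private final String value;

    private CapitalizedId(String value) {
        this.value = value;
    }

    public static CapitalizedId of(String raw) {
        // Validate the business invariant before constructing
        Objects.requireNonNull(raw, "value must not be null");
        return new CapitalizedId(raw.toUpperCase(Locale.ROOT));
    }

    public String value() {
        return value;
    }
}
```

Then `CapitalizedId.of("abc5464g").value()` yields "ABC5464G", and the private constructor guarantees no un-capitalized instance can exist.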
Is it eligible to do it like this with toUpperCase() in constructor?
Yes, in the sense that what you end up with is still an expression of the ValueObject pattern.
It's not consistent with the idea that initializers should initialize, and not also include other responsibilities. See Misko Hevery 2008.
Will this specific implementation be an expensive mistake? Probably not.
I am trying to understand whether there could be any issues with a Predicate defined at class level in a multithreaded application. We have defined such predicates in our services and use them in different methods of the same class. Currently we have not seen any issues, but I am curious how our class-level Predicate object is going to behave. Will there be any inconsistency in its behaviour?
eg:
class SomeClass {
    Predicate<String> check = (value) -> value.contains("SomeString");
    // remaining impl. of the class
}
The predicate in your example is categorically thread-safe. It calls a method on an intrinsically thread-safe (and immutable) object.
This does not generalize to all predicates, though. For example:
Predicate<StringBuilder> check = (value) -> value.indexOf("SomeString") >= 0;
is not thread-safe. Another thread could mutate the contents of the StringBuilder argument while this predicate is checking it. The predicate could also be vulnerable to memory model related inconsistencies.
(The StringBuilder class is not thread-safe; see javadoc.)
It is not clear what you mean by "class level". Your example shows a predicate declared as a regular field, not a static (class level) field.
With a variable declared as a (mutable) instance field, it is difficult to reason about the thread-safety of the field in isolation. This can be solved by declaring the field as final.
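A minimal sketch of the final-field variant (class and method names are hypothetical), exercising the shared predicate from several threads; since the lambda is stateless and operates on immutable String arguments, concurrent calls cannot interfere with each other:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Predicate;

class SomeService {
    // final field + stateless lambda over immutable String arguments:
    // safe to share across threads
    final Predicate<String> check = value -> value.contains("SomeString");
}

public class PredicateDemo {
    static List<Boolean> runChecks(List<String> inputs) throws Exception {
        SomeService service = new SomeService();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Boolean>> tasks = new ArrayList<>();
        for (String input : inputs) {
            // Each task reads the one shared predicate concurrently
            tasks.add(() -> service.check.test(input));
        }
        List<Boolean> results = new ArrayList<>();
        for (Future<Boolean> f : pool.invokeAll(tasks)) {
            results.add(f.get());
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runChecks(List.of("xxSomeStringxx", "nothing here")));
    }
}
```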
Just yesterday, I decided to begin learning the Haxe programming language after having used Actionscript 3 for the past few years. Today I have been exploring abstract types, and I have come to realize that they seem quite different from abstract classes in Java. I am beginning to grasp some of what they do, but I am unsure of what abstracts are used for. What constitutes the proper use of abstracts in Haxe, and when ought I to favor them over classes?
For instance, below is an incomplete definition for a complex number type using an abstract type. Ought I to prefer this or just an ordinary class?
abstract Complex({real:Float, imag:Float}) {
    public function new(real:Float, imag:Float) {
        this = { real: real, imag: imag };
    }

    public function real():Float { return this.real; }
    public function imag():Float { return this.imag; }

    @:op(A + B)
    public static function add(lhs:Complex, rhs:Complex):Complex {
        return new Complex(lhs.real() + rhs.real(), lhs.imag() + rhs.imag());
    }

    public function toString():String {
        return real() + " + " + imag() + "i";
    }
}
Indeed abstracts are not at all like abstract classes in Java. Abstract types in Haxe are powerful and interesting. Their main characteristic is that they are types that exist only at compile-time. At runtime they are entirely replaced by the wrapped type. Methods are transformed into static functions. In the case you described all of your instances will be replaced by anonymous objects with the two fields real and imag. Is that a good use case? Probably yes since a Complex type is not meant to be extended and you probably want to define some operator overloading (as you did for the addition).
To keep it even more light-weight you could use an Array<Float> as the wrapped type where the first element is the real part and the second the imaginary one.
So what is good about abstract types?
they add semantics to types (particularly primitive types). For example you could define an abstract RGB(Int) {} to always output a very efficient color encoding with the benefit of methods and properties. Or you could have an abstract Path(String) {} to conveniently deal with path concatenation, relative paths and the like.
you can define operator overloading. In the case above you could do something like white + black and get something meaningful out of it.
similarly to operator overloading, abstracts can define implicit casts from and to other types. In the case of the RGB above you could easily define a method fromString() to parse a hex string into an Int representing a color. With the implicit cast you could do: var color : RGB = "#669900";. thx.color defines a lot of abstracts for color handling.
they are ideal to wrap the very powerful Enums in Haxe. With an abstract you can add methods and properties to enumerations (that natively do not support any of that).
they are ideal to wrap optimized code. Abstract methods can be inlined and the wrapped type ensures that you are not adding any additional layer of indirection when executing your code.
What is not so good? Or better, what we should know about abstracts?
since they are just a compile-time artifact, you cannot use runtime checks (e.g. no Std.is(value, MyAbstract)).
abstracts are not classes, so no inheritance.
class SimpleTest {
    void met(Object a) {
        println "Object"
    }

    void met(String b) {
        println "String"
    }

    static main(args) {
        SimpleTest i = new SimpleTest()
        i.met(null)
    }
}
This code will produce the output "Object". Groovy will not choose the most specialized version of the method here: String is more specialized than Object, but for a null argument that rule does not apply.
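For contrast, plain Java resolves the same overloads statically at compile time and picks the most specific applicable one, so a literal null goes to the String version:

```java
public class JavaDispatch {
    static String met(Object a) {
        return "Object";
    }

    static String met(String b) {
        return "String";
    }

    public static void main(String[] args) {
        // Java picks the most specific applicable overload at compile
        // time, so a null literal matches the String parameter here.
        System.out.println(met(null)); // prints "String"
    }
}
```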
Groovy uses a distance calculation approach. Basically, if you imagine the classes and interfaces as nodes in a graph, connected by their inheritance relationships, then we look for the distance from the given argument type (the runtime type of the argument) to the parameter type (the static type the method parameter has). The connections have different weights: going to the superclass means a distance of, I think, 3; going to an interface, 1; wrapping a primitive is also 1; varargs wrapping also has a weight (and cannot really be represented in the graph anymore, so sorry for the slightly failing image).
In the case of null this cannot work, of course. Here we look at the distance of the parameter type to Object instead. While for the non-null case we take the most specific method possible, for the null case we take the most general one instead. In Java you would normally have the static type, or use a cast, to ensure which overload is selected. In Groovy we don't have a static type, and what is most specific often cannot be decided correctly. Thus we decided on the most general approach instead for that case. It works really well in general.
Object then acts as a kind of fallback that allows you to do central null handling. In future versions we may allow the use of an explicit null type, which would then be preferred over Object if present.
While you can often see the distance approach directly for classes, it is a bit more complicated for interfaces. Basically the algorithm goes like this: if my current class directly implements the interface we are looking for, then it is a match with distance 1. If any of the interfaces the class implements has the interface we look for as a parent, then count the "hops" until we are there as the distance. But we look for the shortest distance. So we also look the same way at the superclass; any search result from there will have that distance +1 (for the superclass hop). If the superclass search gives a shorter distance than the search on the implemented interfaces, the superclass search result is taken instead.
As for handling null with interfaces: the distance to Object here is 1 if the interface does not extend another. If it does, it is the distance of the parent interface +1. If multiple interfaces are extended, it is the shortest path again.
Let us look at List and Integer for null.
List extends Collection, Collection extends Iterable, and Iterable has no parent. That makes a distance of 1 for Iterable, 2 for Collection and finally 3 for List.
Integer extends Number, Number extends Object. Since we hop two times, we have a distance of 6 here (2×3), much bigger than the other case. Yes, that means in general we prefer interfaces. We do that for practical reasons, actually, since this way proved to be closest to actual programming practice.
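The hop counting described above can be sketched roughly in Java. This is a simplification for illustration, not Groovy's real algorithm; the weight of 3 per superclass hop is taken from the description above, and only reflection over the standard hierarchy is used:

```java
public class DistanceSketch {
    // Rough sketch of the null-case distance to Object, as described above:
    // interface hops cost 1 each (shortest path wins), superclass hops cost 3.
    static int distanceToObject(Class<?> type) {
        if (type == Object.class) {
            return 0;
        }
        if (type.isInterface()) {
            Class<?>[] parents = type.getInterfaces();
            if (parents.length == 0) {
                return 1; // e.g. Iterable: no parent interface
            }
            int shortest = Integer.MAX_VALUE;
            for (Class<?> parent : parents) {
                // Shortest path over all extended interfaces
                shortest = Math.min(shortest, 1 + distanceToObject(parent));
            }
            return shortest;
        }
        // Class: each superclass hop costs 3
        return 3 + distanceToObject(type.getSuperclass());
    }

    public static void main(String[] args) {
        System.out.println(distanceToObject(Iterable.class));             // 1
        System.out.println(distanceToObject(java.util.Collection.class)); // 2
        System.out.println(distanceToObject(Integer.class));              // 6
    }
}
```

With the pre-Java-21 hierarchy (List extends Collection directly), List comes out at 3, matching the example above.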
I'm checking out Sharp Architecture's code. So far it's cool, but I'm having problems getting my head around how to implement DDD value objects in the framework (doesn't seem to be anything mentioning this in the code). I'm assuming the base Entity class and Repository base are to be used for entities only. Any ideas on how to implement value objects in the framework?
In Sharp Arch there is a class ValueObject in the namespace SharpArch.Domain.DomainModel. This class inherits from BaseObject and overrides the == and != operators and the Equals() and GetHashCode() methods. The overrides just call the BaseObject versions of those methods, which in turn use the GetTypeSpecificSignatureProperties() method to get the properties to use in the equality comparison.
Bottom line is that an Entity's equality is determined by:
- reference equality
- same type?
- IDs are the same
- comparison of all properties decorated with the [DomainSignature] attribute

For ValueObjects, the BaseObject's Equals method is used:
- reference equality
- same type?
- comparison of all public properties
This is a little bit simplified; I suggest you get the latest code from GitHub and read through the code in the three classes mentioned yourself.
Edit: Regarding persistence, this SO question might help. Other than that, refer to the official NH and Fluent NH documentation
Value objects are simple objects that don't require a base class. (The only reason entities have base classes is to provide equality based on identity.) Implementing a value object just means creating a class to represent a value from your domain. Most of the time, value objects should be immutable and provide equality comparison methods to determine equality to other value objects of the same type. Take a look here.
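As a sketch of that idea in plain Java (no framework base class; the Money type and its fields are hypothetical), an immutable value object with purely value-based equality looks like this:

```java
import java.util.Objects;

// Hypothetical immutable value object: equality is based purely on the
// held values, not on object identity, and no framework base class is used.
public final class Money {
    private final String currency;
    private final long amountInCents;

    public Money(String currency, long amountInCents) {
        this.currency = Objects.requireNonNull(currency);
        this.amountInCents = amountInCents;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Money)) return false;
        Money other = (Money) o;
        return amountInCents == other.amountInCents
                && currency.equals(other.currency);
    }

    @Override
    public int hashCode() {
        return Objects.hash(currency, amountInCents);
    }
}
```

Two separately constructed instances with the same currency and amount compare equal, which is exactly the behavior the ValueObject base class automates via reflected properties.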