Proper Uses for Abstracts - haxe

Just yesterday, I decided to begin learning the Haxe programming language after having used ActionScript 3 for the past few years. Today I have been exploring abstract types, and I have come to realize that they seem quite different from abstract classes in Java. I am beginning to grasp some of what they do, but I am unsure of what abstracts are used for. What constitutes the proper use of abstracts in Haxe, and when ought I to favor them over classes?
For instance, below is an incomplete definition for a complex number type using an abstract type. Ought I to prefer this or just an ordinary class?
abstract Complex({real:Float, imag:Float}) {
    public function new(real:Float, imag:Float) {
        this = { real: real, imag: imag };
    }
    public function real():Float { return this.real; }
    public function imag():Float { return this.imag; }
    @:op(A + B)
    public static function add(lhs:Complex, rhs:Complex):Complex {
        return new Complex(lhs.real() + rhs.real(), lhs.imag() + rhs.imag());
    }
    public function toString():String {
        return real() + " + " + imag() + "i";
    }
}

Indeed, abstracts are not at all like abstract classes in Java. Abstract types in Haxe are powerful and interesting. Their main characteristic is that they exist only at compile time: at runtime they are entirely replaced by the wrapped type, and their methods are transformed into static functions. In the case you described, all of your instances will be replaced by anonymous objects with the two fields real and imag. Is that a good use case? Probably yes, since a Complex type is not meant to be extended and you probably want to define some operator overloading (as you did for the addition).
To keep it even more lightweight you could use an Array<Float> as the wrapped type, where the first element is the real part and the second the imaginary one; a sketch of that variant follows.
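A minimal sketch of that Array-backed variant (untested, same API as the structure-backed version above):

abstract Complex(Array<Float>) {
    public inline function new(real:Float, imag:Float) {
        // index 0 holds the real part, index 1 the imaginary part
        this = [real, imag];
    }
    public inline function real():Float { return this[0]; }
    public inline function imag():Float { return this[1]; }
    @:op(A + B)
    public static inline function add(lhs:Complex, rhs:Complex):Complex {
        return new Complex(lhs.real() + rhs.real(), lhs.imag() + rhs.imag());
    }
}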
So what is good about abstract types?
they add semantics to types (particularly primitive types). For example you could define an abstract RGB(Int) {} that keeps the very efficient Int color encoding while gaining the benefit of methods and properties. Or you could have an abstract Path(String) {} to conveniently deal with path concatenation, relative paths and the like.
you can define operator overloading. In the case above you could do something like white + black and get something meaningful out of it.
similarly to operator overloading, abstracts can define implicit casts from and to other types. In the case of the RGB above you could easily define a method fromString() to parse a hex string into an Int representing a color. With the implicit cast you could write: var color : RGB = "#669900"; (see the sketch after this list). thx.color defines a lot of abstracts for color handling.
they are ideal to wrap the very powerful Enums in Haxe. With an abstract you can add methods and properties to enumerations (that natively do not support any of that).
they are ideal to wrap optimized code. Abstract methods can be inlined and the wrapped type ensures that you are not adding any additional layer of indirection when executing your code.
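For instance, the implicit-cast point could look roughly like this (a minimal sketch with a simplified, unvalidated hex parser; not production code):

abstract RGB(Int) from Int to Int {
    inline function new(v:Int) {
        this = v;
    }
    // @:from enables the implicit conversion from String
    @:from public static function fromString(s:String):RGB {
        // expects "#RRGGBB"; no error handling in this sketch
        return new RGB(Std.parseInt("0x" + s.substr(1)));
    }
}

// usage, inside some function:
// var color:RGB = "#669900";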
What is not so good? Or better, what should we know about abstracts?
since they are just a compile-time artifact, you cannot use runtime checks (eg: no Std.is(value, MyAbstract)).
abstracts are not classes, so no inheritance.

Related

Examples of languages that hide variable multiplicity

What are some examples of programming languages, extensions to programming languages or other solutions that hide the multiplicity of variables when operating on them, calling methods, etc.?
Specifically, I imagine a system where I have a single typed collection of objects that transparently forwards any method call on the collection so that the method is applied to each object individually, including using the return values in a meaningful way. Preferably I would like to see examples of languages that do this well, but it could also be interesting to see solutions where this does not work well.
I imagine something like this:
struct Foo
{
    int bar();
};

void myFunction()
{
    // 4 Foo objects are created in a vector
    vector<Foo> vals(4);
    // The bar() method is applied to each of the Foo objects, and each
    // returns an int that is automatically inserted into a new vector
    vector<int> results = vals.bar();
}
Take a look at Java 8 streams. Basically, you'd "stream" the container's contents, and indicate to the stream that each thing that goes through should have the method Foo::bar applied to it.
vals.stream().forEach(Foo::bar);
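Note that forEach discards the return values. To also collect each int that bar() returns into a new container, as the question asks, you would map and then collect; a minimal sketch, assuming vals is a List<Foo>:

import java.util.List;
import java.util.stream.Collectors;

// inside some method, given List<Foo> vals:
List<Integer> results = vals.stream()
                            .map(Foo::bar)                 // apply bar() to each element
                            .collect(Collectors.toList()); // gather the returned ints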
A lot of these concepts come from earlier languages, including Lisp (list processing).

What overloaded method is chosen by Groovy when null is passed as a parameter?

class SimpleTest {
    void met( Object a ) {
        println "Object"
    }
    void met( String b ) {
        println "String"
    }
    static main( args ) {
        SimpleTest i = new SimpleTest()
        i.met(null)
    }
}
This code will produce the output "Object". Groovy does not choose the most specialized version of the method here: String is more specialized than Object, but that rule does not apply when the argument is null.
Groovy uses a distance-calculation approach. Imagine the classes and interfaces as nodes in a graph, connected by their inheritance relationships; we then look for the distance from the given argument type (the runtime type of the argument) to the parameter type (the static type the method parameter has). The connections have different weights: going to the superclass means a distance of (I think) 3, going to an interface 1, wrapping a primitive is also 1, and varargs wrapping has a weight as well (and cannot really be represented in the graph anymore, so apologies for the slightly imperfect image).
In the case of null this cannot work, of course. Here we look at the distance from the parameter type to Object instead. While in the non-null case we take the most specific method, for null we take the most general one instead. In Java you would normally have the static type, or use a cast, to ensure what is to be selected. In Groovy we don't have a static type, and what is most specific often cannot be decided correctly, so we decided on the most general approach for that case. It works really well in general.
Object then is kind of like a fallback, that allows you central null handling. In future versions we may allow the usage of an explicit null type, which then would be preferred over Object if there.
While you can often see directly the distance approach for classes, it is a bit more complicated for interfaces. Basically the algorithm goes like this: If my current class directly implements the interface we are looking for, then it is a match with distance 1. If any of the interfaces the class implements has the interface we look for as parent, then count the "hops" till we are there as distance. But we look for the shortest distance. So we also look the same way at the super class. Any search result from there will have that distance +1 (for the super class hop). If the super class search gives a shorter distance than the search on the implementing interfaces, the super class search result will be taken instead.
As for handling null with interfaces... The distance to Object is here 1 if the interface does not extend another. If it does it is the distance of the parent interface +1. If multiple interfaces are extended, it is the shortest path again.
Let us look at List and Integer for null.
List extends Collection, Collection extends Iterable, and Iterable has no parent. That makes a distance of 1 for Iterable, 2 for Collection and finally 3 for List.
Integer extends Number, Number extends Object. Since we hop two times, we have a distance of 6 here (2x3), much bigger than the other case. Yes, that means in general we prefer interfaces. We do that for practical reasons, since this way proved to be closest to actual programming practice.
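So for a null argument with one List overload and one Integer overload, the List version wins. A minimal Groovy script sketching this (hypothetical method names, just restating the distances computed above):

def pick(List l)    { "List" }
def pick(Integer i) { "Integer" }

// distance to Object: 3 for List (via interfaces), 6 for Integer (via superclasses)
assert pick(null) == "List"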

Is there a language where types can take content of fields into account?

I had this crazy idea and was wondering if such a thing exists:
Usually, in a strongly typed language, types are mainly concerned with memory layout, or membership in an abstract 'class'. So class Foo {int a;} and class Bar {int a; int b;} are distinct, and so is class Baz {int a; int b;} (although it has the same layout as Bar, it's a different type). So far, so good.
I was wondering if there is a language that allows one to specify more fine-grained constraints as to what makes a type. For example, I'd like to have:
class Person {
    //...
    int height;
}
class RollercoasterSafe: Person (where .height > 140) {}

void ride(RollercoasterSafe p) { //... }
and the compiler would make sure that it's impossible to have p.height < 140 in ride. This is just a stupid example, but I'm sure there are use cases where this could really help. Is there such a thing?
It depends on whether the predicate is checked statically or dynamically. In either case the answer is yes, but the resulting systems look different.
On the static end: PL researchers have proposed the notion of a refinement type, which consists of a base type together with a predicate: http://en.wikipedia.org/wiki/Program_refinement. I believe the idea of refinement types is that the predicates are checked at compile time, which means that you have to restrict the language of predicates to something tractable.
It's also possible to express constraints using dependent types, which are types parameterized by run-time values (as opposed to polymorphic types, which are parameterized by other types).
There are other tricks that you can play with powerful type systems like Haskell's, but IIUC you would have to change height from int to something whose structure the type checker could reason about.
On the dynamic end: SQL has something called domains, as in CREATE DOMAIN: http://developer.postgresql.org/pgdocs/postgres/sql-createdomain.html (see the bottom of the page for a simple example), which again consist of a base type and a constraint. The domain's constraint is checked dynamically whenever a value of that domain is created. In general, you can solve the problem by creating a new abstract data type and performing the check whenever you create a new value of the abstract type. If your language allows you to define automatic coercions from and to your new type, then you can use them to essentially implement SQL-like domains; if not, you just live with plain old abstract data types instead.
Then there are contracts, which are not thought of as types per se but can be used in some of the same ways, such as constraining the arguments and results of functions/methods. Simple contracts include predicates (eg, "accepts a Person object with height > 140"), but contracts can also be higher-order (eg, "accepts a Person object whose makeSmallTalk() method never returns null"). Higher-order contracts cannot be checked immediately, so they generally involve creating some kind of proxy. Contract checking does not create a new type of value or tag existing values, so the dynamic check will be repeated every time the contract is performed. Consequently, programmers often put contracts along module boundaries to minimize redundant checks.
An example of a language with such capabilities is Spec#. From the tutorial documentation available on the project site:
Consider the method ISqrt in Fig. 1, which computes the integer square root of a given integer x. It is possible to implement the method only if x is non-negative, so
int ISqrt(int x)
    requires 0 <= x;
    ensures result*result <= x && x < (result+1)*(result+1);
{
    int r = 0;
    while ((r+1)*(r+1) <= x)
        invariant r*r <= x;
    {
        r++;
    }
    return r;
}
In your case, you could probably do something like (note that I haven't tried this, I'm just reading docs):
void ride(Person p)
    requires p.height > 140;
{
    //...
}
There may be a way to roll up that requires clause into a type declaration such as RollercoasterSafe that you have suggested.
Your idea sounds somewhat like C++0x's concepts, though not identical. However, concepts have been removed from the C++0x standard.
I don't know any language that supports that kind of thing, but I don't find it really necessary.
I'm pretty convinced that simply applying validation in the setters of the properties may give you all the necessary restrictions.
In your RollercoasterSafe class example, you may throw an exception when the height property is set to a value less than 140; a sketch follows. It's runtime checking, but polymorphism can make compile-time checking impossible.
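A minimal C# sketch of that setter validation (hypothetical class name, just to illustrate the runtime check):

public class RollercoasterSafePerson
{
    private int height;

    public int Height
    {
        get { return height; }
        set
        {
            // runtime check standing in for the compile-time constraint
            if (value < 140)
                throw new ArgumentOutOfRangeException("value", "height must be at least 140");
            height = value;
        }
    }
}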

Why .NET 4 variance for generic type arguments not also for classes? [duplicate]

Possible Duplicates:
Why isn't there generic variance for classes in C# 4.0?
Why does C# (4.0) not allow co- and contravariance in generic class types?
The new .NET 4.0 co- and contravariance for generic type arguments only works for interfaces and delegates. What is the reason for not supporting it for classes too?
For type safety, C# 4.0 supports covariance/contravariance ONLY for type parameters marked with in or out.
If this were extended to classes, you'd also have to mark type parameters with in or out, and that would end up being very restrictive. This is most likely why the designers of the CLR chose not to allow it. For instance, consider the following class:
public class Stack<T>
{
    int position;
    T[] data = new T[100];
    public void Push (T obj) { data[position++] = obj; }
    public T Pop() { return data[--position]; }
}
It would be impossible to annotate T as either in or out, because T is used in both input and output positions. Hence this class could never be covariant or contravariant, even if C# supported covariant/contravariant type parameters for classes.
Interfaces solve the problem nicely. We can define two interfaces as follows, and have Stack implement both:
public interface IPoppable<out T> { T Pop(); }
public interface IPushable<in T> { void Push (T obj); }
Note that T is covariant for IPoppable and contravariant for IPushable. This means T can be either covariant or contravariant - depending on whether you cast to IPoppable or IPushable.
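For example, assuming the Stack<T> above is declared as class Stack<T> : IPoppable<T>, IPushable<T>, a minimal sketch of both directions:

// covariance: a producer of string is safely a producer of object
var strings = new Stack<string>();
strings.Push("world");
IPoppable<object> popper = strings;
object o = popper.Pop();

// contravariance: a consumer of object safely accepts string
IPushable<string> pusher = new Stack<object>();
pusher.Push("hello");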
Another reason that covariance/contravariance would be of limited use with classes is it would rule out using type parameters as fields - because fields effectively allow both input and output operations. In fact, it would be hard to write a class that does anything useful at all with a type parameter marked as in or out. Even the simplest case of writing a covariant Enumerable implementation would present a challenge - how would you get source data into the instance to begin with?
The .NET team, along with the C# and VB.NET teams, has limited resources, and the work they have done on co- and contravariance solves most of the real-world problems. Type systems are very complex to get right: a solution that works in 99.9999% of cases is not good enough if it leads to unsafe code in the other cases.
I don't think the cost/time of supporting co- and contravariance specs (e.g. "in"/"out") on class methods would be of great enough value. I can see very few cases where they would be usable, due to the lack of multiple class inheritance.
Would you rather have waited another 6 months for .NET just to get this support?
Another way to think of this is that in .NET:
Interfaces / delegates are used to model the conceptual type system of an application.
Classes are used to implement the above types.
Class inheritance is used to reduce code duplication while doing the above.
Co- and contravariance is about the conceptual type system of an application.

Is it a bad idea to use the new Dynamic Keyword as a replacement switch statement?

I like the new dynamic keyword and have read that it can be used as a replacement for the visitor pattern.
It makes the code more declarative, which I prefer.
Is it a good idea, though, to replace all instances of switch on 'Type' with a class that implements dynamic dispatch?
class VistorTest
{
    public string DynamicVisit(dynamic obj)
    {
        return Visit(obj);
    }

    private string Visit(string str)
    {
        return "a string was called with value " + str;
    }

    private string Visit(int value)
    {
        return "an int was called with value " + value;
    }
}
It really depends on what you consider a "good idea".
This works, and it works in a fairly elegant manner. It has some advantages and some disadvantages to other approaches.
On the advantage side:
It's concise, and easy to extend
The code is fairly simple
For the disadvantages:
Error checking is potentially more difficult than a classic visitor implementation, since all error checking must be done at runtime. For example, if you pass visitorTest.DynamicVisit(4.2);, you'll get an exception at runtime, but no compile time complaints.
The code may be less obvious, and have a higher maintenance cost.
Personally, I think this is a reasonable approach. The visitor pattern, in a classic implementation, has a fairly high maintenance cost and is often difficult to test cleanly. This potentially makes the cost slightly higher, but makes the implementation much simpler.
With good error checking, I don't have a problem with using dynamic as an approach here. Personally, I'd probably use an approach like this, since the alternatives that perform in a reasonable manner get pretty nasty otherwise.
However, there are a couple of changes I would make here. First, as I mentioned, you really need to include error checking.
Second, I would have DynamicVisit handle the binding failure itself, which might make it (slightly) more obvious what's happening:
class VistorTest
{
    // RuntimeBinderException lives in the Microsoft.CSharp.RuntimeBinder namespace
    public string DynamicVisit(dynamic obj)
    {
        try
        {
            return Visit(obj);
        }
        catch (RuntimeBinderException)
        {
            // Handle the exception here!
            Console.WriteLine("Invalid type specified");
        }
        return string.Empty;
    }
    // ...Rest of code
The visitor pattern exists primarily to work around the fact that some languages do not allow double dispatch and multiple dispatch.
Multiple dispatch or multimethods is the feature of some object-oriented programming languages in which a function or method can be dynamically dispatched based on the run time (dynamic) type of more than one of its arguments. This is an extension of single dispatch polymorphism where a method call is dynamically dispatched based on the actual derived type of the object. Multiple dispatch generalizes the dynamic dispatching to work with a combination of two or more objects.
Until version 4, C# was one of those languages. With the introduction of the dynamic keyword, however, C# allows developers to opt-in to this dispatch mechanism just as you've shown. I don't see anything wrong with using it in this manner.
You haven't changed the type safety at all, because even a switch (or, more likely, a dispatch dictionary, given that C# does not allow switching on type) would need a default case that throws when it can't match a function to call; this will do exactly the same if it can't find a suitable function to bind to. A sketch of that dictionary alternative follows.
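For comparison, a minimal sketch of the dispatch-dictionary alternative mentioned above (hypothetical code, not from the original answer):

using System;
using System.Collections.Generic;

class TypeDispatcher
{
    // maps the runtime type of the argument to a handler
    private static readonly Dictionary<Type, Func<object, string>> Handlers =
        new Dictionary<Type, Func<object, string>>
        {
            { typeof(string), o => "a string was called with value " + o },
            { typeof(int),    o => "an int was called with value " + o },
        };

    public static string Dispatch(object obj)
    {
        Func<object, string> handler;
        if (Handlers.TryGetValue(obj.GetType(), out handler))
            return handler(obj);
        // the "default case" the text refers to
        throw new ArgumentException("No handler for " + obj.GetType());
    }
}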
