Types and type variables in Haskell

Scratching at the surface of Haskell's type system, I ran this:
Prelude> e = []
Prelude> ec = tail "a"
Prelude> en = tail [1]
Prelude> :t e
e :: [a]
Prelude> :t ec
ec :: [Char]
Prelude> :t en
en :: Num a => [a]
Prelude> en == e
True
Prelude> ec == e
True
Somehow, despite en and ec having different types, they both test True on == e. I say "somehow" not because I am surprised (I am not), but because I don't know the name of the rule/mechanism that allows this. It is as if the type variable "a" in the expression "[] == en" is allowed to take on the value "Num" for the evaluation. And likewise, when tested with "[] == ec", it is allowed to become "Char".
The reason I'm not sure my interpretation is correct is this:
Prelude> (en == e) && (ec == e)
True
, because intuitively this implies that, in the same expression, e assumes the values Num and Char "at the same time" (at least that's how I'm used to interpreting the semantics of &&). Unless the "assumption" of Char only acts during the evaluation of (ec == e), and (en == e) is evaluated independently, in a separate... reduction? (I'm guessing at the terminology here).
And then comes this:
Prelude> en == ec
<interactive>:80:1: error:
• No instance for (Num Char) arising from a use of ‘en’
• In the first argument of ‘(==)’, namely ‘en’
In the expression: en == ec
In an equation for ‘it’: it = en == ec
Prelude> ec == en
<interactive>:81:7: error:
• No instance for (Num Char) arising from a use of ‘en’
• In the second argument of ‘(==)’, namely ‘en’
In the expression: ec == en
In an equation for ‘it’: it = ec == en
I'm not surprised by the error, but I am surprised that in both tests the error message complains about "a use of ‘en’" - no matter whether it's the first or the second operand.
Perhaps an important lesson needs to be learned about the Haskell type system. Thank you for your time!

When we say that e :: [a], it means that e is a list of elements of any type. Which type? Any type! Whichever type you happen to need at the moment.
If you're coming from a non-ML language, this might be a bit easier to understand by looking at a function (rather than a value) first. Consider this:
f x = [x]
The type of this function is f :: a -> [a]. This means, roughly, that this function works for any type a. You give it a value of this type, and it will give you back a list with elements of that type. Which type? Any type! Whichever you happen to need.
When I call this function, I effectively choose which type I want at the moment. If I call it like f 'x', I choose a = Char, and if I call it like f True, I choose a = Bool. So the important point here is that whoever calls a function chooses the type parameter.
But I don't have to choose it just once and for all eternity. Instead, I choose the type parameter every time I call the function. Consider this:
pair = (f 'x', f True)
Here I'm calling f twice, and I choose different type parameters every time - first time I choose a = Char, and second time I choose a = Bool.
Ok, now for the next step: when I choose the type parameter, I can do it in several ways. In the example above, I choose it by passing a value parameter of the type I want. But another way is to specify the type of result I want. Consider this:
g x = []
a :: [Int]
a = g 0
b :: [Char]
b = g 42
Here, the function g ignores its parameter, so there is no relation between its type and the result of g. But I am still able to choose the type of that result by having it constrained by the surrounding context.
And now, the mental leap: a function without any parameters (aka a "value") is not that different from a function with parameters. It just has zero parameters, that's all.
If a value has type parameters (like your value e for example), I can choose that type parameter every time I "call" that value, just as easily as if it was a function. So in the expression e == ec && e == en you're simply "calling" the value e twice, choosing different type parameters on every call - much like I've done in the pair example above.
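To see the two independent choices explicitly, here is a compilable sketch using the question's definitions (with en specialised to [Int] for simplicity):
e :: [a]
e = []

ec :: [Char]
ec = tail "a"

en :: [Int]
en = tail [1]

demo :: Bool
demo = (en == (e :: [Int])) && (ec == (e :: [Char]))  -- True: a is chosen per use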
The confusion about Num is an altogether different matter.
You see, Num is not a type. It's a type class. Type classes are sort of like interfaces in Java or C#, except you can declare their instances separately, not necessarily together with the definition of the type that implements them.
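For example, a class and its instance can be declared far apart, even in different modules; a minimal sketch (all names here are made up for illustration):
class Describable a where
  describe :: a -> String

data Point = Point Int Int

-- The instance is declared separately, "later":
instance Describable Point where
  describe (Point x y) = "Point " ++ show x ++ " " ++ show y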
So the signature en :: Num a => [a] means that en is a list with elements of any type, as long as that type implements ("has an instance of") the type class Num.
And the way type inference in Haskell works is, the compiler will first determine the most concrete types it can, and then try to find implementations ("instances") of the required type classes for those types.
In your case, the compiler sees that en :: [a] is being compared to ec :: [Char], and it figures: "oh, I know: a must be Char!" Then it goes to find the class instances and notices that a must have an instance of Num, and since a is Char, it follows that Char must have an instance of Num. But it doesn't, and so the compiler complains: "can't find (Num Char)".
As for "arising from a use of ‘en’" - well, that's because en is the reason that a Num instance is required: en is the one that has Num in its type signature, so its presence is what causes the requirement of Num.

Sometimes, it is convenient to think about polymorphic functions as functions taking explicit type arguments. Let's consider the polymorphic identity function as an example.
id :: forall a . a -> a
id x = x
We can think of this function as follows:
first, the function takes as input a type argument named a
second, the function takes as input a value x of the previously chosen type a
last, the function returns x (of type a)
Here's a possible call:
id @Bool True
Above, the @Bool syntax passes Bool as the first argument (the type argument a), while True is passed as the second argument (x of type a = Bool). (This is real syntax under GHC's TypeApplications extension.)
A few other ones:
id @Int 42
id @String "hello"
id @(Int, Bool) (3, True)
We can even partially apply id passing only the type argument:
id @Int :: Int -> Int
id @String :: String -> String
...
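These are not just pseudo-notation: with GHC's TypeApplications extension the examples compile as written. A sketch:
{-# LANGUAGE TypeApplications #-}

exBool :: Bool
exBool = id @Bool True

exString :: String
exString = id @String "hello"

exPair :: (Int, Bool)
exPair = id @(Int, Bool) (3, True)

idInt :: Int -> Int
idInt = id @Int  -- partially applied: only the type argument is given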
Now, note that in most cases Haskell allows us to omit the type argument, i.e. we can write id "hello" and GHC will try to infer the missing type argument. Roughly, it works as follows: id "hello" is transformed into id @t "hello" for some unknown type t; then, according to the type of id, this call can only type check if "hello" :: t, and since "hello" :: String, we can infer t = String.
Type inference is extremely common in Haskell. Programmers rarely specify their type arguments, and instead let GHC do its job.
In your case:
e :: forall a . [a]
e = []
ec :: [Char]
ec = tail "a"
en :: [Int]
en = tail [1]
Variable e is bound to a polymorphic value. That is, it actually is a sort-of function which takes a type argument a (which can also be omitted), and returns a list of type [a].
By contrast, ec does not take any type argument. It's a plain list of type [Char]. Similarly for en.
We can then use
ec == (e @Char) -- both of type [Char]
en == (e @Int) -- both of type [Int]
Or we can let the type inference engine determine the implicit type arguments:
ec == e -- @Char inferred
en == e -- @Int inferred
The latter can be misleading, since it seems that ec, e, en must all have the same type. In fact, they do not, since different implicit type arguments are being inferred.
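Put together as compilable code (a sketch; the @ syntax requires TypeApplications):
{-# LANGUAGE TypeApplications #-}

e :: [a]
e = []

checks :: (Bool, Bool)
checks = (tail "a" == e @Char, tail [1 :: Int] == e @Int)  -- (True, True)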

Related

Assigning constrained literal to a polymorphic variable

While learning Haskell with Haskell Programming from First Principles, I found an exercise that puzzles me.
Here is the short version:
For the following definition:
a) i :: Num a => a
i = 1
b) Try replacing the type signature with the following:
i :: a
The replacement gives me an error:
error:
• No instance for (Num a) arising from the literal ‘1’
Possible fix:
add (Num a) to the context of
the type signature for:
i' :: forall a. a
• In the expression: 1
In an equation for ‘i'’: i' = 1
|
38 | i' = 1
| ^
It is more or less clear to me how the Num constraint arises.
What is not clear is why assigning 1 to the polymorphic variable i' gives the error.
Why this works:
id 1
while this one doesn't:
i' :: a
i' = 1
id i'
Shouldn't it be possible to assign a more specific value to a less specific variable, losing some type info, if there are no issues?
This is a common misunderstanding. You probably have something in mind like, in a class-OO language,
class Object {};
class Num : public Object { public: Num add(...) { ... } };
class Int : public Num { int i; ... };
And then you would be able to use an Int value as the argument to a function that expects a Num argument, or a Num value as the argument to a function that expects an Object.
But that's not at all how Haskell's type classes work. Num is not a class of values (like, in the above example it would be the class of all values that belong to one of the subclasses). Instead, it's the class of all types that represent specific flavours of numbers.
How is that different? Well, a polymorphic literal like 1 :: Num a => a does not generate a specific Num value that can then be upcasted to a more general class. Instead, it expects the caller to first pick a concrete type in which you want to render the number, then generates the number immediately in that type, and afterwards the type never changes.
In other words, a polymorphic value has an implicit type-level argument. Whoever wants to use i needs to do so in a context where both of the following hold:
It is unambiguous which type a should be used. (It doesn't necessarily need to be fixed right there: the caller could also itself be a polymorphic function.)
The compiler can prove that this type a has a Num instance.
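A sketch of both conditions at work, using the question's i:
i :: Num a => a
i = 1

n1 :: Int
n1 = i     -- a is unambiguously Int, and Num Int holds: accepted

n2 :: Double
n2 = i     -- a is unambiguously Double: accepted

-- n3 :: String
-- n3 = i  -- rejected: no instance for (Num String)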
In C++, the analogue of Haskell typeclasses / polymorphic literal is not [sub]classes and their objects, but instead templates that are constrained to a concept:
#include <concepts>
template<typename A>
concept Num = std::constructible_from<A, int>; // simplified
template<Num A>
A poly_1() {
    return 1;
}
Now, poly_1 can be used in any setting that demands a type which fulfills the Num concept, i.e. in particular a type that is constructible_from an int, but not in a context which requires some other type.
(In older C++ such a template would just be duck-typed, i.e. it's not explicit that it requires a Num setting but the compiler would just try to use it as such and then give a type error upon noticing that 1 can't be converted to the specified type.)
tl;dr
A value i' declared as i' :: a must be usable¹ in place of any other value, with no exception. 1 is no such value, as it can't be used, say, where a String is expected, to make just one example.
Longer version
Let's start from an uncontroversial scenario where you do need a type constraint:
plus :: a -> a -> a
plus x y = x + y
This does not compile, because the signature is equivalent to plus :: forall a. a -> a -> a, and it is plainly not true that the RHS, x + y, is meaningful for any common type a that x and y are inhabitants of. So you can fix the above by providing a constraint guaranteeing that + is possible between two as, and you can do so by putting Num a => right after :: (or even by giving up on polymorphic types and just change a to Int).
But there are functions that don't require any constraints on their arguments. Here are three of them:
id :: a -> a
id x = x
const :: a -> b -> a
const x _ = x
swap :: (a, b) -> (b, a)  -- from Data.Tuple
swap (a, b) = (b, a)
You can pass anything to these functions, and they'll always work, because their definitions make no assumption whatsoever on what can be done with those objects, as they just shuffle/ditch them.
Similarly,
i' :: a
i' = 1
cannot compile because it's not true that 1 can represent a value of any type a. It can't represent a String, for instance, whereas the signature i' :: a is expressing the idea that you can put i' in any place: where an Int is expected, as much as where a generic Num is expected, or where a String is expected, and so on.
In other words, the above signature says that you can use i' in both of these statements:
j = i' + 1
k = i' ++ "str"
So the question is: just like we found some functions that have signatures not constraining their arguments in any way, do we have a value that inhabits every single type you can think of?
Yes, there are some values like that, and here are two of them:
i' :: a
i' = error ""
j' :: a
j' = undefined
They're all "bottoms", or ⊥.
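With either of those definitions, both of the earlier statements typecheck; evaluating j or k would of course crash at runtime. A sketch:
i' :: a
i' = undefined

j :: Int
j = i' + 1       -- typechecks: i' is used at type Int

k :: String
k = i' ++ "str"  -- typechecks: i' is used at type String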
(¹) By "usable" I mean that when you write it in some place where the code compiles, the code keeps compiling.

How can Haskell integer literals be comparable without being in the Eq class?

In Haskell (at least with GHC v8.8.4), being in the Num class does NOT imply being in the Eq class:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> let { myEqualP :: Num a => a -> a -> Bool ; myEqualP x y = x==y ; }
<interactive>:6:60: error:
• Could not deduce (Eq a) arising from a use of ‘==’
from the context: Num a
bound by the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
at <interactive>:6:7-41
Possible fix:
add (Eq a) to the context of
the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
• In the expression: x == y
In an equation for ‘myEqualP’: myEqualP x y = x == y
λ>
It seems this is because, for example, Num instances can be defined for some function types.
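For instance, one can give functions pointwise numeric operations; since functions have no Eq instance, a Num instance clearly cannot imply Eq. A sketch of such an instance:
instance Num b => Num (a -> b) where
  f + g       = \x -> f x + g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger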
Furthermore, if we prevent ghci from overguessing the type of integer literals, they have just the Num type constraint:
λ>
λ> :set -XNoMonomorphismRestriction
λ>
λ> x=42
λ> :type x
x :: Num p => p
λ>
Hence, terms like x or 42 above have no reason to be comparable.
But still, they happen to be:
λ>
λ> y=43
λ> x == y
False
λ>
Can somebody explain this apparent paradox?
Integer literals can't be compared without using Eq. But that's not what is happening, either.
In GHCi, under NoMonomorphismRestriction (which is the default in GHCi nowadays; not sure about GHC 8.8.4) x = 42 results in a variable x of type forall p. Num p => p.1
Then you do y = 43, which similarly results in the variable y having type forall q. Num q => q.2
Then you enter x == y, and GHCi has to evaluate in order to print True or False. That evaluation cannot be done without picking a concrete type for both p and q (which has to be the same). Each type has its own code for the definition of ==, so there's no way to run the code for == without deciding which type's code to use.3
However each of x and y can be used as any type in Num (because they have a definition that works for all of them)4. So we can just use (x :: Int) == y and the compiler will determine that it should use the Int definition for ==, or x == (y :: Double) to use the Double definition. We can even do this repeatedly with different types! None of these uses change the type of x or y; we're just using them each time at one of the (many) types they support.
Without the concept of defaulting, a bare x == y would just produce an Ambiguous type variable error from the compiler. The language designers thought that would be extremely common and extremely annoying with numeric literals in particular (because the literals are polymorphic, but as soon as you do any operation on them you need a concrete type). So they introduced rules that some ambiguous type variables should be defaulted to a concrete type if that allows compilation to continue.5
So what is actually happening when you do x == y is that the compiler is just picking Integer to use for x and y in that particular expression, because you haven't given it enough information to pin down any particular type (and because the defaulting rules apply in this situation). Integer has an Eq instance so it can use that, even though the most general types of x and y don't include the Eq constraint. Without picking something it couldn't possibly even attempt to call == (and of course the "something" it picks has to be in Eq or it still won't work).
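To make that choice by hand instead of relying on defaulting, one can pin the type with an annotation; a sketch:
x, y :: Num p => p
x = 42
y = 43

asInteger :: Bool
asInteger = (x :: Integer) == y  -- the type defaulting would pick here

asDouble :: Bool
asDouble = (x :: Double) == y    -- a different, equally valid choice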
If you turn on -Wtype-defaults (which is included in -Wall), the compiler will print a warning whenever it applies defaulting6, which makes the process more visible.
1 The forall p part is implicit in standard Haskell, because all type variables are automatically introduced with forall at the beginning of the type expression in which they appear. You have to turn on extensions to even write the forall manually; either ExplicitForAll just for the ability to write forall, or any one of the many extensions that actually add functionality that makes forall useful to write explicitly.
2 GHCi will probably pick p again for the type variable, rather than q. I'm just using a different one to emphasise that they're different variables.
3 Technically it's not each type that necessarily has a different ==, but each Eq instance. Some of those instances are polymorphic, so they apply to multiple types, but that only really comes up with types that have some structure (like Maybe a, etc). Basic types like Int, Integer, Double, Char, Bool, each have their own instance, and each of those instances has its own code for ==.
4 In the underlying system, a type like forall p. Num p => p is in fact much like a function; one that takes a Num instance for a concrete type as a parameter. To get a concrete value you have to first "apply the function" to a type's Num instance, and only then do you get an actual value that could be printed, compared with other things, etc. In standard Haskell these instance parameters are always invisibly passed around by the compiler; some extensions allow you to manipulate this process a little more directly.
This is the root of what's confusing about why x == y works when x and y are polymorphic variables. If you had to explicitly pass around the type/instance arguments it would be obvious what's going on here, because you would have to manually apply both x and y to something and compare the results.
5 The gist of the default rules is that if the constraints on an ambiguous type variable are:
all of them are built-in classes
at least one of them is a numeric class (Num, Floating, etc)
then GHC will try Integer to see if that type checks and allows all other constraints to be resolved. If that doesn't work it will try Double, and if that doesn't work then it reports an error.
You can set the types it will try with a default declaration (the "default default" being default (Integer, Double)), but you can't customise the conditions under which it will try to default things, so changing the default types is of limited use in my experience.
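For reference, such a declaration sits at the top level of a module; a sketch:
module Main where

default (Int, Double)  -- replaces the "default default" of (Integer, Double)

main :: IO ()
main = print 42  -- this literal now defaults to Int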
GHCi however comes with extended default rules that are a bit more useful in an interpreter (because it has to do type inference line-by-line instead of on the whole module at once). You can turn those on in compiled code with ExtendedDefaultRules extension (or turn them off in GHCi with NoExtendedDefaultRules), but again, neither of those options is particularly useful in my experience. It's annoying that the interpreter and the compiler behave differently, but the fundamental difference between module-at-a-time compilation and line-at-a-time interpretation mean that switching either's default rules to work consistently with the other is even more annoying. (This is also why NoMonomorphismRestriction is in effect by default in the interpreter now; the monomorphism restriction does a decent job at achieving its goals in compiled code but is almost always wrong in interpreter sessions).
6 You can also use a typed hole in combination with the asTypeOf helper to get GHC to tell you what type it's inferring for a sub-expression like this:
λ :t x
x :: Num p => p
λ :t y
y :: Num p => p
λ (x `asTypeOf` _) == y
<interactive>:19:15: error:
• Found hole: _ :: Integer
• In the second argument of ‘asTypeOf’, namely ‘_’
In the first argument of ‘(==)’, namely ‘(x `asTypeOf` _)’
In the expression: (x `asTypeOf` _) == y
• Relevant bindings include
it :: Bool (bound at <interactive>:19:1)
Valid hole fits include
x :: forall p. Num p => p
with x
(defined at <interactive>:1:1)
it :: forall p. Num p => p
with it
(defined at <interactive>:10:1)
y :: forall p. Num p => p
with y
(defined at <interactive>:12:1)
You can see it tells us nice and simply Found hole: _ :: Integer, before proceeding with all the extra information it likes to give us about errors.
A typed hole (in its simplest form) just means writing _ in place of an expression. The compiler errors out on such an expression, but it tries to give you information about what you could use to "fill in the blank" in order to get it to compile; most helpfully, it tells you the type of something that would be valid in that position.
foo `asTypeOf` bar is an old pattern for adding a bit of type information. It returns foo but it restricts (this particular usage of) it to be the same type as bar (the actual value of bar is totally unused). So if you already have a variable d with type Double, x `asTypeOf` d will be the value of x as a Double.
Here I'm using asTypeOf "backwards"; instead of using the thing on the right to constrain the type of the thing on the left, I'm putting a hole on the right (which could have any type), but asTypeOf conveniently makes sure it's the same type as x without otherwise changing how x is used in the overall expression (so the same type inference still applies, including defaulting, which isn't always the case if you lift a small part of a larger expression out to ask GHCi for its type with :t; in particular :t x won't tell us Integer, but Num p => p).

Why does the type of local variables that are values affect the type of input variables in the type signature of a function?

Prelude> func f = [(show s, f == s) | s <- [0, 1..10]]
Prelude> :type func
func :: (Num a, Enum a, Show a, Eq a) => a -> [(String, Bool)]
I would expect f to just have an Eq constraint, but all the class constraints applied to s are also applied to f for some reason. Replacing s with any constant removes the relevant type constraint for f, and replacing s in the equality removes all class constraints except Eq a for f.
Can someone explain why the type of local variables that are values affects the type of input variables that are values?
Eq doesn't exist in a vacuum. To compare two things for equality, you have to have two things. And, crucially, those two things have to be of the same type. In Haskell, 0 == "A" isn't just false; it's a type error. It literally doesn't make sense.
f == s
When the compiler sees this, even if it knows nothing else about the types of f and s, it knows what (==) is. (==) is a function with the following signature.
(==) :: Eq a => a -> a -> Bool
Both arguments are of the same type. So now and forevermore, for the rest of type-checking this expression, we must have f and s of the same type. Anything required of s is also required of f. And s takes values from [0, 1..10]. Your type constraints arise as follows:
Num is required since s takes values from a list of literal integers.
Enum is required by the [..] list enumeration syntax.
Show is required by show s.
Eq is required by the f == s equality expression.
Now, if we replace s with a constant, we get something like
func f = [(show s, f == 0) | s <- [0, 1..10]]
Now f is being compared with 0. It has no relation to s. f requires Eq (for (==)) and Num (since we're comparing against zero, a number). s, on the other hand, requires Enum, Num, Eq, and Show. In the abstract, this should actually be a type error, since we've given no indication as to which type s should be and there aren't enough clues to figure it out. But type defaulting kicks in and we'll get Integer out of it.
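Indeed, in a module the constant version compiles with exactly that signature, with defaulting resolving s locally; a sketch (func' is a made-up name to avoid clashing with the original):
func' :: (Eq a, Num a) => a -> [(String, Bool)]
func' f = [(show s, f == 0) | s <- [0, 1 .. 10]]
-- Inside the comprehension, s's type is ambiguous (Enum, Num, Show),
-- so the defaulting rules pick Integer for it.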

How to write this simple Monad?

I'm trying to write this:
data A = A Int deriving Show

instance Monad A where
  return x = A x
  A x >>= f = f x

main = print a where a = A 1
I learned it from the book Learn You a Haskell for Great Good!:
instance Monad Maybe where
  return x = Just x
  Nothing >>= f = Nothing
  Just x >>= f = f x
  fail _ = Nothing
but got this error:
a.hs:3:16: error:
• Expected kind ‘* -> *’, but ‘A’ has kind ‘*’
• In the first argument of ‘Monad’, namely ‘A’
In the instance declaration for ‘Monad A’
So how do I write it? When finished, I want to be able to write
A 1 >>= \x -> x + 1
and get A 2
You can't make A an instance of Monad given its current definition.
The error message tells you that the compiler expects something of kind * -> *. This means a type constructor that takes a type as input, like Maybe, IO, or []. In other words, the type must be parametrically polymorphic.
In order to get an intuitive sense of this, consider return, which has the type:
return :: a -> m a
The type argument a is unconstrained. This means that you should be able to take any value of type a and turn it into a value of the monadic type m a.
If I give you the Boolean value False, which A value will you construct from it? If I give you the string "foo", which A value will you construct from it? If I give you the function id, which A value will you construct from it?
If you want to make your type a Monad instance, you must give it a type parameter, at least like this:
data A a = A Int deriving Show
Here, the a type parameter is in the phantom role; it's not used, but now you're at least able to make it a Functor. This version of A is isomorphic to the Const functor. You can make it a Functor and Applicative instance, but can you make it a Monad?
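For instance, a Functor instance is immediate, precisely because there is nothing of type a inside to map over; a sketch:
data A a = A Int deriving Show

instance Functor A where
  fmap _ (A n) = A n  -- the function is unused: a is phantom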

Confused about Haskell polymorphic types

I have defined a function:
gen :: a -> b
So I'm just trying to provide a simple implementation:
gen 2 = "test"
But it throws an error:
gen.hs:51:9:
Couldn't match expected type ‘b’ with actual type ‘[Char]’
‘b’ is a rigid type variable bound by
the type signature for gen :: a -> b at gen.hs:50:8
Relevant bindings include gen :: a -> b (bound at gen.hs:51:1)
In the expression: "test"
In an equation for ‘gen’: gen 2 = "test"
Failed, modules loaded: none.
So my function is not correct. Why is a not typed as Int and b not typed as String?
This is a very common misunderstanding.
The key thing to understand is that if you have a variable in your type signature, then the caller gets to decide what type that is, not you!
So you cannot say "this function returns type x" and then just return a String; your function actually has to be able to return any possible type that the caller may ask for. If I ask your function to return an Int, it has to return an Int. If I ask it to return a Bool, it has to return a Bool.
Your function claims to be able to return any possible type, but actually it only ever returns String. So it doesn't do what the type signature claims it does. Hence, a compile-time error.
A lot of people apparently misunderstand this. In (say) Java, you can say "this function returns Object", and then your function can return anything it wants. So the function decides what type it returns. In Haskell, the caller gets to decide what type is returned, not the function.
Edit: Note that the type you've written, a -> b, is impossible. No function can ever have this type. There's no way a function can construct a value of type b out of thin air. The only way this can work is if some of the inputs also involve type b, or if b belongs to some kind of typeclass which allows value construction.
For example:
head :: [x] -> x
The return type here is x ("any possible type"), but the input type also mentions x, so this function is possible; you just have to return one of the values that was in the original list.
Similarly, gen :: a -> a is a perfectly valid function. But the only thing it can do is return its input unchanged (i.e., what the id function does).
This property of type signatures telling you what a function does is a very useful and powerful property of Haskell.
gen :: a -> b does not mean "for some type a and some type b, gen must be of type a -> b"; it means "for any type a and any type b, gen must be of type a -> b".
To motivate this: if the type checker sees something like let x :: Int = gen "hello", it sees that gen is used as String -> Int here and then looks at gen's type to see whether it can be used that way. The type is a -> b, which can be specialized to String -> Int, so the type checker decides that this is fine and allows the call. That is, since the function is declared to have type a -> b, the type checker allows you to call it with any type you want and allows you to use the result as any type you want.
However that clearly does not match the definition you gave the function. The function knows how to handle numbers as arguments - nothing else. And likewise it knows how to produce strings as its result - nothing else. So clearly it should not be possible to call the function with a string as its argument or to use the function's result as an Int. So since the type a -> b would allow that, it's clearly the wrong type for that function.
Your type signature gen :: a -> b states that your function works for any type a (and provides any type b the caller of the function demands).
Besides the fact that such a function is hard to come by, the line gen 2 = "test" tries to return a String, which very well may not be what the caller demands.
Excellent answers. Given your profile, however, you seem to know Java, so I think it's valuable to connect this to Java as well.
Java offers two kinds of polymorphism:
Subtype polymorphism: e.g., every reference type is a subtype of java.lang.Object
Generic polymorphism: e.g., in the List<T> interface.
Haskell's type variables are a version of (2). Haskell doesn't really have a version of (1).
One way to think of generic polymorphism is in terms of templates (which is what C++ people call them): a type that has a type variable parameter is a template that can be specialized into a variety of monomorphic types. So for example, the interface List<T> is a template for constructing monomorphic interfaces like List<String>, List<List<String>> and so on, all of which have the same structure but differ only because the type variable T gets replaced uniformly throughout the signatures with the instantiation type.
The concept that "the caller chooses" that several responders have mentioned here is basically a friendly way of referring to instantiation. In Java, for example, the most common point where the type variable gets "chosen" is when an object is instantiated:
List<String> myList = new ArrayList<String>();
A second common point is that a subtype of a generic type may instantiate the supertype's variables:
class MyFunction implements Function<Integer, String> {
    public String apply(Integer i) { ... }
}
A third is generic methods, which allow the caller to instantiate a variable that's not a parameter of the enclosing type:
/**
 * Visitor-pattern style interface for a simple arithmetical language
 * abstract syntax tree.
 */
interface Expression {
    // The caller of `accept` implicitly chooses which type `R` is,
    // by supplying a `Visitor<R>` with `R` instantiated to something
    // of its choice.
    <R> R accept(Expression.Visitor<R> visitor);

    static interface Visitor<R> {
        R constant(int i);
        R add(Expression a, Expression b);
        R multiply(Expression a, Expression b);
    }
}
In Haskell, instantiation is carried out implicitly by the type inference algorithm. In any expression where you use gen :: a -> b, type inference will infer what types need to be instantiated for a and b, given the context in which gen is used. So basically, "caller chooses" means that any code that uses gen controls the types to which a and b will be instantiated; if I write gen [()], then I'm implicitly instantiating a to [()]. The error here means that your type declaration says that gen [()] is allowed, but your equation gen 2 = "test" implies that it's not.
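With GHC's TypeApplications extension, the instantiation can even be written explicitly. Since gen :: a -> b has no possible implementation, here is a sketch using const :: a -> b -> a instead:
{-# LANGUAGE TypeApplications #-}

example :: Int
example = const @Int @String 3 "ignored"  -- a := Int, b := String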
In Haskell, type variables are implicitly quantified, but we can make this explicit:
{-# LANGUAGE ScopedTypeVariables #-}
gen :: forall a b . a -> b
gen x = ????
The "forall" is really just a type level version of a lambda, often written Λ. So gen is a function taking three arguments: a type, bound to the name a, another type, bound to the name b, and a value of type a, bound to the name x. When your function is called, it is called with those three arguments. Consider a saner case:
fst :: (a,b) -> a
fst (x1,x2) = x1
This gets translated to
fst :: forall (a::*) (b::*) . (a,b) -> a
fst = /\ (a::*) -> /\ (b::*) -> \ (x::(a,b)) ->
        case x of
          (x1, x2) -> x1
where * is the type (often called a kind) of normal concrete types. If I call fst (3::Int, 'x'), that gets translated into
fst Int Char (3Int, 'x')
where I use 3Int to represent specifically the Int version of 3. We could then calculate it as follows:
fst Int Char (3Int, 'x')
=
(/\ (a::*) -> /\ (b::*) -> \(x::(a,b)) -> case x of (x1,x2) -> x1) Int Char (3Int, 'x')
=
(/\ (b::*) -> \(x::(Int,b)) -> case x of (x1,x2) -> x1) Char (3Int, 'x')
=
(\(x::(Int,Char)) -> case x of (x1,x2) -> x1) (3Int, 'x')
=
case (3Int, 'x') of (x1,x2) -> x1
=
3Int
Whatever types I pass in, as long as the value I pass in matches, the fst function will be able to produce something of the required type. If you try to do this for a->b, you will get stuck.
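Incidentally, GHC's TypeApplications extension turns these type arguments into real syntax (modulo the /\ notation); a sketch:
{-# LANGUAGE TypeApplications #-}
import Prelude hiding (fst)

fst :: (a, b) -> a
fst (x1, _) = x1

demo :: Int
demo = fst @Int @Char (3, 'x')  -- explicitly instantiating a and b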
