What are these explicit "forall"s doing? - haskell

What is the purpose of the foralls in this code?
class Monad m where
    (>>=) :: forall a b. m a -> (a -> m b) -> m b
    (>>)  :: forall a b. m a -> m b -> m b
    -- Explicit for-alls so that we know what order to
    -- give type arguments when desugaring
(some code omitted). This is from the code for Monads.
My background: I don't really understand forall or when Haskell has them implicitly.
Also, and it may not be significant, but GHCi allows me to omit the forall when giving >> a type:
Prelude> :t (>>) :: Monad m => m a -> m b -> m b
(>>) :: Monad m => m a -> m b -> m b
  :: (Monad m) => m a -> m b -> m b
(no error).

My background: I don't really understand forall or when Haskell has them implicitly.
Okay, consider the type of id, a -> a. What does a mean, and where does it come from? When you define a value, you can't just use arbitrary variables that aren't defined anywhere. You need a top-level definition, or a function argument, or a where clause, &c. In general, if you use a variable, it must be bound somewhere.
The same is true of type variables, and forall is one such way to bind a type variable. Anywhere you see a type variable that isn't explicitly bound (for example, class Foo a where ... binds a inside the class definition), it's implicitly bound by a forall.
So, the type of id is implicitly forall a. a -> a. What does this mean? Pretty much what it says. We can get a type a -> a for all possible types a, or from another perspective, if you pick any specific type you can get a type representing "functions from your chosen type to itself". The latter phrasing should sound a bit like defining a function, and as such you can think of forall as being similar to a lambda abstraction for types.
GHC uses various intermediate representations during compilation, and one of the transformations it applies is making the similarity to functions more direct: implicit foralls are made explicit, and anywhere a polymorphic value is used for a specific type, it is first applied to a type argument.
We can even write both foralls and lambdas as one expression. I'll abuse notation for a moment and replace forall a. with /\a => for visual consistency. In this style, we can define id = /\a => \(x::a) -> (x::a) or something similar. So, an expression like id True in your code would end up translated to something like id Bool True instead; just id True would no longer even make sense.
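As an aside, modern GHC lets you write these type arguments yourself in source Haskell with the TypeApplications extension; a small sketch of the idea (my own example, not how the quoted library code is actually compiled):
{-# LANGUAGE TypeApplications #-}

-- id's full type is forall a. a -> a; the @Bool below supplies the type
-- argument explicitly, much like the hypothetical "id Bool True" above.
example :: Bool
example = id @Bool True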
Just as you can reorder function arguments, you can likewise reorder the type arguments, subject only to the (rather obvious) restriction that type arguments must come before any value arguments of that type. Since implicit foralls are always the outermost layer, GHC could potentially choose any order it wanted when making them explicit. Under normal circumstances, this obviously doesn't matter.
I'm not sure exactly what's going on in this case, but based on the comments I would guess that the conversion to using explicit type arguments and the desugaring of do notation are, in some sense, not aware of each other, and therefore the order of type arguments is specified explicitly to ensure consistency. After all, if something is blindly applying two type arguments to an expression, it matters a great deal whether that expression's type is forall a b. m a -> m b -> m b or forall b a. m a -> m b -> m b!

Related

Confusing about Haskell type inference

I have just started learning Haskell. As Haskell is statically typed and has polymorphic type inference, the type of the identity function is
id :: a -> a
suggesting id can take any type as its parameter and return itself. It works fine when I try:
a = (id 1, id True)
I just suppose that at compile time, the first id is Num a => a -> a, and the second id is Bool -> Bool. When I try the following code, it gives an error:
foo f a b = (f a, f b)
result = foo id 1 True
It says that the type of a must be the same as the type of b, since it works fine with
result = foo id 1 2
But isn't it true that the type of id's parameter can be polymorphic, so that a and b can be different types?
All right, this is a weird spooky corner of Haskell's type system. The problem here is that there are two ways to infer a type for your function foo.
-- rank 1
foo :: forall a b. (a -> b) -> a -> a -> (b, b)
foo f a b = (f a, f b)
-- rank 2
foo' :: (forall a. a -> a) -> a -> b -> (a, b)
foo' f a b = (f a, f b)
The second type is the one you want, but the first type is the one you're getting. The second type, as amalloy pointed out, is a rank-2 type (we're going to ignore what the "two" means here, but read the introduction of "Practical type inference for arbitrary-rank types" if you want a good explanation of ranks – don't be put off by the academic nature of the PDF; the beginning is written accessibly and clearly).
We'll defer the definition of higher-ranked types for now and just say that the problem is that GHC is unable to infer the rank-2 type. Quote the paper:
Complete type inference is known to be undecidable for higher-rank (impredicative) type systems, but in practice programmers are more than willing to add type annotations to guide the type inference engine, and to document their code....
Kfoury and Wells show that typeability is decidable for rank ≤ 2, and undecidable for all ranks ≥ 3 (Kfoury & Wells, 1994). For the rank-2 fragment, the same paper gives a type inference algorithm. This inference algorithm is somewhat subtle, does not interact well with user-supplied type annotations, and has not, to our knowledge, been implemented in a production compiler.
Undecidable means there can be no algorithm that always leads to a correct yes-or-no decision. So there you have it: impossible to infer a rank-3-or-higher type, and it's too gosh-darn-hard to infer the rank-2 type.
Now, back to rank 2. The (forall a. a -> a) is what makes it rank-2. There's already an excellent Stack Overflow question about what the forall keyword means so I'll refer you to that, but basically it means you're able to call f a and f b in the expression (f a, f b) while having a and b be different types, which is what you wanted in the first place, before all this hot mess.
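Concretely, with the RankNTypes extension turned on and the rank-2 signature written out, the call from the question compiles; a small sketch (repeating the definition so it stands alone):
{-# LANGUAGE RankNTypes #-}

foo' :: (forall a. a -> a) -> a -> b -> (a, b)
foo' f a b = (f a, f b)

result' :: (Int, Bool)
result' = foo' id 1 True   -- fine: id is used at Int and at Bool inside foo'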
One last thing: The reason you don't normally see foralls in GHCi is that any foralls on the very outer scope are left off. So forall a b. (a -> b) -> a -> a -> (b, b) is equivalent to (a -> b) -> a -> a -> (b, b).
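If you do want to see them, you can ask GHCi to print the outer quantifiers; something like this (exact output may vary by GHC version):
Prelude> :set -fprint-explicit-foralls
Prelude> :t id
id :: forall a. a -> a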
Overall this is a pain point of the language that's poorly explained.
(Hat tip to @amalloy in the comments.)

Relationship between Haskell's 'forall' and '=>'

I'm having trouble wrapping my mind around the relationship (and interactions) between Haskell's forall and => (and for that matter the . that often connects them).
For example
λ> :t (+)
λ> :t id
give
(+) :: forall a. Num a => a -> a -> a
id :: forall a. a -> a
and while I understand how these work in these specific cases, I'm not comfortable parsing the expressions (signatures?) forall a. Num a => and forall a. themselves into something meaningful, or into something I can generally understand in more complex contexts.
What do forall a. Num a => and forall a. mean? Specifically, what is the roles played in each by forall, => and a?
(As another perspective, without invoking the "implicit dictionary passing" implementation of type classes):
forall a. in Haskell means "for every type a".¹ It's introducing a type variable, and declaring that the rest of the type expression has to be valid whatever choice is made for a.
You usually don't see it in basic Haskell (without turning on any extensions in GHC), because it's not necessary; you just use type variables in your type signature, and GHC automatically assumes there are foralls introducing those variables at the start of the expression.
For example:
zip :: forall a. ( forall b. ( [a] -> [b] -> [(a, b)] ))
zip :: forall a. forall b. [a] -> [b] -> [(a, b)]
zip :: forall a b. [a] -> [b] -> [(a, b)]
zip :: [a] -> [b] -> [(a, b)]
The above are all the same; they just tell us that zip can be a way of zipping a list of a together with a list of b to make a list of (a, b) pairs, whatever choice we feel like making for a and b.
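For example (my own illustration), one particular choice of a and b looks like this:
pairs :: [(Int, Char)]
pairs = zip [1, 2, 3] "abc"   -- here a is chosen to be Int and b to be Char
-- pairs == [(1,'a'),(2,'b'),(3,'c')]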
forall mainly comes into play with extensions, because then you can introduce type variables with scopes other than the default ones assumed by GHC if you don't explicitly write them.
Now, the constraints => type syntax can be read roughly as "these constraints imply this type", or "provided these constraints hold, you can use this type". It's used all the time, even in vanilla Haskell with no extensions, so it's important to understand what it means and how it works and not just copy and paste and hope.
The => arrow allows us to state a set of constraints on the variables in the rest of the type expression; it lets us put limitations on what choices can be made to introduce the type variable. You should read it first by ignoring everything left of the => arrow, and reading the right part on its own. This gives you the "shape" of the type. The stuff to the left of the => arrow tells you what kind of types you can use the rest of the type with.
An example:
(+) :: Num a => a -> a -> a
This means that (+) is exactly the same kind of thing as anything with a simpler type like a -> a -> a, except the Num a => is telling us that we're not free to choose any type a. We can only choose a type for a when we know that it is a member of the Num type class (another, slightly more precise way of saying "a is a member of Num" is "the constraint Num a holds").
Note that GHC is still assuming that there's an implicit forall a to introduce the type variable a here, so it really looks like:
(+) :: forall a. Num a => a -> a -> a
In which case you can read this off moderately easily as an English sentence once you know what forall a. and Num a => means: "For every type a, provided Num a holds, plus has the type a -> a -> a".
¹ If you're familiar with formal logic at all, it's just an ASCII-friendly way of writing ∀a, a "universally quantified variable".
As the forall matter appears to be settled, I'll attempt to explain the => a bit. The things to the left of the => are arguments, much like ones to the left of a ->. But you don't apply these arguments manually, and they can only have specific types.
f :: Num a => a -> a
is a function that takes two arguments:
A Num a dictionary.
An a.
When you apply f, you just provide the a. GHC has to provide the Num a. If it's applied to a specific concrete type like Int, GHC knows Num Int and can supply it at the call site. Otherwise, it checks that Num a is provided by some outer context and uses that one. The great thing about Haskell's typeclass system is that it ensures that any two Num a dictionaries, however they are found, will be identical. So it doesn't matter where the dictionary comes from—it is sure to be the right one.
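As a rough picture (an ordinary-Haskell sketch of the idea, not GHC's real machinery; NumDict, plusD and fromIntegerD are made-up names), you can imagine the constraint as a hidden record argument:
-- A hand-rolled "dictionary" for a Num-like constraint.
data NumDict a = NumDict
  { plusD        :: a -> a -> a
  , fromIntegerD :: Integer -> a
  }

-- f :: Num a => a -> a   roughly corresponds to:
f' :: NumDict a -> a -> a
f' d x = plusD d x x

-- At a concrete type GHC would supply the dictionary for you;
-- here we build and pass it by hand:
intDict :: NumDict Int
intDict = NumDict (+) fromInteger

-- f' intDict 3  ==>  6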
Further discussion
A lot of these things we're talking about aren't exactly part of Haskell so much as they're part of the way GHC interprets Haskell by translation to GHC core, AKA System FC, an extension of the very well-studied System F, AKA the Girard-Reynolds calculus. System FC is an explicitly typed polymorphic lambda calculus with algebraic datatypes, etc., but no type inference, no instance resolution, etc. After GHC checks the types in your Haskell code, it translates that code to System FC by a thoroughly mechanical process. It can do this confidently because the type checker "decorates" the code with all the information the desugarer needs to plumb all the dictionaries around. If you have a Haskell function that looks like
foo :: forall a . Num a => a -> a -> a
foo x y = x + y
then that will translate to something that looks like
foo :: forall a . Num a -> a -> a -> a
foo = /\ (a :: *) -> \ (d :: Num a) -> \ (x :: a) -> \ (y :: a) -> (+) #a d x y
The /\ is a type lambda – it's just like a normal lambda, except it takes a type variable. The # marks the application of a function to a type argument. The + is really just a record selector. It chooses the right field from the dictionary it's passed.
I suppose it helps if we add the implied parentheses:
(+) :: ∀ a . ( Num a => (a -> (a -> a)) )
id :: ∀ a . ( a -> a )
The ∀ always goes together with a dot (.). It's basically special syntax meaning “the names between ∀ and . are type variables that I want to introduce into the following scope”†
=> denotes what Idris calls an implicit function: Num a is a dictionary for the instance Num a, and such a dictionary is implicitly needed whenever you're adding numbers. But whether a is a type variable here that was previously introduced by some ∀, or a fixed type, doesn't really matter. You could also have
(+) :: Num Int => Int -> Int -> Int
That's just superfluous, because the compiler knows that Int is a Num instance and hence automatically (implicitly!) chooses the right dictionary.
Really, there's no particular relationship between ∀ and =>, they just happen to be used often together.
†Actually this is a type-level lambda. The type expression ∀ a . b behaves analogously to the value level expression \a -> b.

How to define a function inside haskell newtype?

I am trying to decipher the record syntax in Haskell for newtype, and my understanding breaks when there is a function inside the newtype. Consider this simple example:
newtype C a b = C { getC :: (a -> b) -> a }
As per my reasoning, C is a type which accepts a function and a parameter in its constructor.
so,
let d1 = C $ (2 *) 3
:t d1 also gives
d1 :: Num ((a -> b) -> a) => C a b
Again to check this I do :t getC d1, which shows this
getC d1 :: Num ((a -> b) -> a) => (a -> b) -> a
Why the error if I try getC d1? getC should return the function and its parameter, or at least apply the parameter.
I can't have newtype C a b = C { getC :: (a->b)->b } deriving (Show), because this won't make sense!
It's always good to emphasise that Haskell has two completely separate namespaces, the type language and the value language. In your case, there's
A type constructor C :: Type -> Type -> Type, which lives in the type language. It takes two types a, b (of kind Type) and maps them to a type C a b (also of kind Type)†.
A value constructor C :: ((a->b) -> a) -> C a b, which lives in the value language. It takes a function f (of type (a->b) -> a) and maps it to a value C f (of type C a b).
Perhaps it would be less confusing if you had
newtype CT a b = CV ((a->b) -> a)
but because for a newtype there is always exactly one value constructor (and exactly one type constructor) it makes sense to name them the same.
CV is a value constructor that accepts one function, full stop. That function will have signature (a->b) -> a, i.e. its argument is also a function, but as far as CT is concerned this doesn't really matter.
Really, it's kind of wrong that data and newtype declarations use a = symbol, because it doesn't mean the things on the left and right are “the same” – can't, because they don't even belong to the same language. There's an alternative syntax which expresses the relation better:
{-# LANGUAGE GADTs #-}
import Data.Kind
data CT :: Type -> Type -> Type where
  CV :: ((a->b) -> a) -> CT a b
As for that value you tried to construct
let d1 = CV $ (\x->(2*x)) 3
here you did not pass “a function and a parameter” to CV. What you actually did‡ was, you applied the function \x->2*x to the value 3 (might as well have written 6) and passed that number to CV. But as I said, CV expects a function. What then happens is, GHC tries to interpret 6 as a function, which gives the bogus constraint Num ((a->b) -> a). What that means is: “if (a->b)->a is a number type, then...”. Of course it isn't a number type, so the rest doesn't make sense either.
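To actually build a value, you pass a whole function of type (a->b) -> a; a small sketch (my own example, reusing the question's C and getC):
newtype C a b = C { getC :: (a -> b) -> a }

d2 :: C Integer b
d2 = C (\_f -> 6)   -- one function argument; it happens to ignore its input

-- getC d2 gives the wrapped function back, so e.g.
-- getC d2 (+ 1)  ==>  6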
†It may seem redundant to talk of “types of kind Type”. Actually, when talking about “types” we often mean “entities in the type-level language”. These have kinds (“type-level types”), of which Type (the kind of (lifted) value-level values) is the most prominent, but not the only one – you can also have type-level numbers and type-level functions; C is indeed one. Note that Type was historically written *, but this notation is deprecated because it's inconsistent (confusion with the multiplication operator).
‡This is because $ has the lowest precedence, i.e. the expression CV $ (\x->(2*x)) 3 is actually parsed as CV ((\x->(2*x)) 3), or equivalently let y = 2*3 in CV y.
As per my reasoning C is a type which accepts a function and a parameter
How so? The constructor has only one argument.
Newtypes always have a single constructor with exactly one argument.
The type C, otoh, has two type parameters. But that has nothing to do with the number of arguments you can apply to the constructor.

strange existential type

http://www.iai.uni-bonn.de/~jv/mpc08.pdf - in this article I cannot understand the following declaration:
instance TreeLike CTree where
  ...

abs :: CTree a -> Tree a

improve :: (forall m. TreeLike m => m a) -> Tree a
improve m = abs m
What difference does (forall m. TreeLike m => m a) make? (I thought TreeLike m => m a would suffice here.)
Why does it permit abs here, if the m in m a can be any TreeLike, not just CTree?
That's a rank-2 type, not an existential type. What that type means is that the argument to improve must be polymorphic. You can't pass a value of type CTree a (for instance) to improve. It cannot be concrete in the type constructor at all. It explicitly must be polymorphic in the type constructor, with the constraint that the type constructor implement the TreeLike class.
For your second question, this allows the implementation of improve to pick whichever type for m it wants to - it's the implementation's choice, and it's hidden from the caller by the type system. The implementation happens to pick CTree for m in this case. That's totally fine. The trick is that the caller of improve doesn't get to use that information anywhere.
This has the practical result that the value cannot be constructed using details of any particular type - it has to be built using the functions of the TreeLike class instead. But the implementation gets to pick a specific type to work with, allowing it to use details of the representation internally.
Whether m can be "any TreeLike" depends on your perspective.
From the perspective of implementing improve, it's true--m can be any TreeLike, so it picks one that's convenient, and uses abs.
From the perspective of the argument m (which is to say, the perspective of whatever is applying improve to some argument), rather the opposite holds: m in fact must be able to be any TreeLike, not a single one that we choose.
Compare this to the type of numeric literals--something like (5 :: forall a. Num a => a) means that it's any Num instance we want it to be, but if a function expects an argument of type (forall a. Num a => a) it wants something that can be any Num instance it chooses. So we could give it a polymorphic 5 but not, say, the Integer 5.
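In code, that difference looks roughly like this (a sketch; needsAnyNum is a made-up name, and the signature needs RankNTypes):
{-# LANGUAGE RankNTypes #-}

-- The function itself chooses the instances, so its argument must stay polymorphic.
needsAnyNum :: (forall a. Num a => a) -> (Int, Double)
needsAnyNum n = (n, n)

ok :: (Int, Double)
ok = needsAnyNum 5                    -- a bare literal is still polymorphic: fine

-- bad = needsAnyNum (5 :: Integer)   -- rejected: already committed to Integer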
You can, in many ways, think of polymorphic types as meaning that the function takes a type as an extra argument, which tells it what specific type we want to use for each type variable. So to see the difference between (forall m. TreeLike m => m a) -> Tree a and forall m. TreeLike m => m a -> Tree a you can read them as something like (M -> M a) -> Tree a vs. M -> M a -> Tree a.

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type, it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is obviously a type error for the function to always return an Int when the type signature says I should be able to choose any Integral type, e.g. Integer or Word64.
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand that you want a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result whose type is a type variable x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...
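If you do want to keep the polymorphic signature, the body has to produce whichever Integral type the caller picked; a minimal sketch of one way to do that:
f :: Integral b => a -> b
f _ = fromIntegral (3 :: Int)   -- convert 3 to whatever b the caller chose
-- (writing simply  f _ = 3  also works, since a numeric literal is itself polymorphic)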
