How does Haskell type-check infinite recursive values?

Define this data type:
data NaturalNumber = Zero | S NaturalNumber
  deriving (Show)
In Haskell (compiled using GHC), this code will run without warning or error:
infinity = S infinity
inf1 = S inf2
inf2 = S inf1
So both recursive and mutually recursive infinitely deep values pass the type check.
However, the following code gives an error:
j = S 'h'
The error states: Couldn't match expected type ‘NaturalNumber’ with actual type ‘Char’. The same error persists even if I set
j = S (S (S (S ... (S 'h')...)))
with a hundred or so nested S's.
How can Haskell tell that infinity is a valid member of NaturalNumber but j is not?
Interestingly, it also allows:
bottom = bottom
k = S bottom
Does Haskell merely try to prove incorrectness of a program, and if it fails to do so then allows it? Or is Haskell's type system not Turing complete, so if it allows the program then the program is provably (at the type level) correct?
(If the type system (in the formal semantics of Haskell, instead of only the type checker) is Turing complete, then it will either fail to realize some correctly typed programs are correct or some incorrectly typed programs are incorrect, due to the undecidability of the halting problem.)

Well
S :: NaturalNumber -> NaturalNumber
In
infinity = S infinity
We start by assuming nothing: we assign infinity some unsolved type _a and try to figure out what it is. We know S is applied to infinity, so _a must be whatever is on the left side of the arrow in the constructor’s type, which is NaturalNumber. We also know that infinity is the result of an application of S, so again infinity :: NaturalNumber (had we gotten two conflicting types for this definition, we’d have to emit a type error).
Similar reasoning holds for the mutually recursive definitions. inf1 must be a NaturalNumber because it appears as an argument to S in inf2; inf2 must be a NaturalNumber because it is the result of S; etc.
The general algorithm is to assign definitions unknown types (notable exceptions are literals and constructors), and then to create constraints on those types by seeing how every definition is used. E.g. this must be some form of list because it’s being reversed, and this must be an Int because it’s being used to look up a value from an IntMap, etc.
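For instance (a made-up example, not from the original answer), in the following definition the unknown type assigned to ys gets pinned down step by step by its uses:
-- ys starts as an unknown type _a; 'map not ys' forces _a ~ [Bool],
-- and 'reverse' preserves it, so GHC infers flipAll :: [Bool] -> [Bool]
flipAll ys = reverse (map not ys)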
In the case of
oops = S 'a'
'a' :: Char as it’s a literal, but, also, we must have 'a' :: NaturalNumber because it’s used as an argument to S. We get the obviously bogus constraint that the type of the literal must be both Char and NaturalNumber, which causes a type error.
And in
bottom = bottom
We start with bottom :: _a. The only constraint is _a ~ _a, because a value of type _a (bottom) is being used where a value of type _a is expected (on the RHS of the definition of bottom). Since there is nothing to further constrain the type, the unsolved type variable is generalized: it gets bound by a universal quantifier to produce bottom :: forall a. a.
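You can watch the generalization happen in GHCi (the exact type variable name GHC picks may differ):
ghci> bottom = bottom
ghci> :type bottom
bottom :: t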
Note how both uses of bottom above have the same type (_a) while inferring the type of bottom. This breaks polymorphic recursion: every occurrence of a value within its definition is taken to be of the same type as the definition itself. E.g.
-- perfectly balanced binary trees
data Binary a = Leaf a | Branch (Binary (a, a))
-- headB :: _a -> _r
headB (Leaf x) = x -- _a ~ Binary _r; headB :: Binary _r -> _r
headB (Branch bin) = fst (headB bin)
-- recursive call has type headB :: Binary _r -> _r
-- but bin :: Binary (_r, _r); mismatch
So you need a type signature:
headB :: {-forall a.-} Binary a -> a
headB (Leaf x) = x
headB (Branch bin) = fst (headB {- at type (a, a) -} bin)
-- knowing exactly what headB's signature is allows for polymorphic recursion
So: when something doesn't have a type signature, the type checker tries to assign it a type, and if it comes across a bogus constraint on its way, it rejects the program. When something has a type signature, the type checker descends into it to make sure it's correct (tries to prove it wrong, if you prefer to think of it that way).
Haskell’s type system is not Turing complete because there are heavy syntactic restrictions to prevent e.g. type lambdas (without language extensions), but it doesn’t suffice to make sure all programs run to completion without error because it still allows bottoms (not to mention all the unsafe functions). It provides the weaker guarantee that, if a program runs to completion without using an unsafe function, it will remain type-correct. Under GHC, with sufficient language extensions, the type system does become Turing complete. I don't think it allows ill-typed programs through; I think the most you can do is throw the compiler into an infinite loop.

Related

Can we tweak "a -> a" function in Haskell?

In Haskell, the id function is declared as id :: a -> a and implemented as just returning its argument without any modification. But if we have some type introspection via TypeApplications, we can try to modify values without breaking the type signature:
{-# LANGUAGE AllowAmbiguousTypes, ScopedTypeVariables, TypeApplications #-}
module Main where

class TypeOf a where
  typeOf :: String

instance TypeOf Bool where
  typeOf = "Bool"

instance TypeOf Char where
  typeOf = "Char"

instance TypeOf Int where
  typeOf = "Int"

tweakId :: forall a. TypeOf a => a -> a
tweakId x
  | typeOf @a == "Bool" = not x
  | typeOf @a == "Int"  = x + 1
  | typeOf @a == "Char" = x
  | otherwise = x
This fails with the error:
"Couldn't match expected type ‘a’ with actual type ‘Bool’"
But I don't see any problems here (the type signature is satisfied).
My question is:
How can we do such a thing in Haskell?
If we can't, what are the theoretical/philosophical etc. reasons for this?
If this implementation of tweakId is not the "original" id, what are the theoretical grounds for saying that an id function must not do any modification at the term level? Or can we have many implementations of the function id :: a -> a? (I see that in practice we can; I can implement such a function in Python, for example. But what does the theory behind Haskell say about this?)
You need GADTs for that.
{-# LANGUAGE ScopedTypeVariables, TypeApplications, GADTs #-}
import Data.Typeable
import Data.Type.Equality

tweakId :: forall a. Typeable a => a -> a
tweakId x
  | Just Refl <- eqT @a @Int = x + 1
  -- etc. etc.
  | otherwise = x
Here we use eqT @type1 @type2 to check whether the two types are equal. If they are, the result is Just Refl, and pattern matching on that Refl is enough to convince the type checker that the two types are indeed equal, so we can use x + 1, since x is now no longer only of type a but also of type Int.
This check requires runtime type information, which we usually do not have due to Haskell's type erasure property. The information is provided by the Typeable type class.
This can also be achieved using a user-defined class like your TypeOf if we make it provide a custom GADT value. This can work well if we want to encode some constraint like "type a is either an Int, a Bool, or a String" where we statically know what types to allow (we can even recursively define a set of allowed types in this way). However, to allow any type, including ones that have not yet been defined, we need something like Typeable. That is also very convenient since any user-defined type is automatically made an instance of Typeable.
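A minimal sketch of that user-defined GADT idea (a variant of your TypeOf whose method returns a GADT instead of a String; all names here are illustrative, and the set of allowed types is closed):
{-# LANGUAGE GADTs, ScopedTypeVariables #-}

-- a custom type representation: pattern matching on it tells
-- the type checker exactly what 'a' is in each branch
data TypeRep' a where
  IntRep  :: TypeRep' Int
  BoolRep :: TypeRep' Bool

class TypeOf a where
  typeRep' :: TypeRep' a

instance TypeOf Int where typeRep' = IntRep
instance TypeOf Bool where typeRep' = BoolRep

tweakId :: forall a. TypeOf a => a -> a
tweakId x = case typeRep' :: TypeRep' a of
  IntRep  -> x + 1 -- here GHC knows a ~ Int
  BoolRep -> not x -- here GHC knows a ~ Bool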
This fails with the error: "Couldn't match expected type ‘a’ with actual type ‘Bool’". But I don't see any problems here.
Well, what if I add this instance:
instance TypeOf Float where
  typeOf = "Bool"
Do you see the problem now? Nothing prevents somebody from adding such an instance, no matter how silly it is. And so the compiler can't possibly make the assumption that having checked typeOf @a == "Bool" is sufficient to actually use x as being of type Bool.
You can squelch the error if you are confident that nobody will add malicious instances, by using unsafe coercions.
import Unsafe.Coerce

tweakId :: forall a. TypeOf a => a -> a
tweakId x
  | typeOf @a == "Bool" = unsafeCoerce (not $ unsafeCoerce x)
  | typeOf @a == "Int"  = unsafeCoerce (unsafeCoerce x + 1 :: Int)
  | typeOf @a == "Char" = unsafeCoerce (unsafeCoerce x :: Char)
  | otherwise = x
but I would not recommend this. The correct way is to not use strings as a poor man's type representation, but instead the standard Typeable class which is actually tamper-proof and comes with suitable GADTs so you don't need manual unsafe coercions. See chi's answer.
As an alternative, you could also use type-level strings and a functional dependency to make the unsafe coercions safe:
{-# LANGUAGE DataKinds, FunctionalDependencies
           , ScopedTypeVariables, UnicodeSyntax, TypeApplications #-}
import GHC.TypeLits (KnownSymbol, symbolVal)
import Data.Proxy (Proxy(..))
import Unsafe.Coerce

class KnownSymbol t => TypeOf a t | a->t, t->a
instance TypeOf Bool "Bool"
instance TypeOf Int "Int"

tweakId :: ∀ a t . TypeOf a t => a -> a
tweakId x = case symbolVal @t Proxy of
  "Bool" -> unsafeCoerce (not $ unsafeCoerce x)
  "Int"  -> unsafeCoerce (unsafeCoerce x + 1 :: Int)
  _      -> x
The trick is that the fundep t->a makes writing another instance like
instance TypeOf Float "Bool"
a compile error right there.
Of course, really the most sensible approach is probably to not bother with any kind of manual type equality at all, but simply use the class right away for the behaviour changes you need:
class Tweakable a where
  tweak :: a -> a

instance Tweakable Bool where
  tweak = not

instance Tweakable Int where
  tweak = (+1)

instance Tweakable Char where
  tweak = id
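Usage is then straightforward (a quick GHCi session, assuming the instances above):
ghci> tweak True
False
ghci> tweak (41 :: Int)
42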
The other answers are both very good for covering the ways you can do something like this in Haskell. But I thought it was worth adding something speaking more to this part of the question:
If we can't, that is theoretical\philosophical etc reasons for this?
Actually Haskellers do generally rely quite strongly on the theory that forbids something like your tweakId from existing with type forall a. a -> a. (Even though there are ways to cheat, using things like unsafeCoerce; this is usually considered bad style if you haven't done something like in leftaroundabout's answer, where a class with functional dependencies ensures the unsafe coerce is always valid)
Haskell uses parametric polymorphism[1]. That means we can write code that works on multiple types because it will treat them all the same; the code only uses operations that will work regardless of the specific type it is invoked on. This is expressed in Haskell types by using type variables; a function with a variable in its type can be used with any type at all substituted for the variable, because every single operation in the function definition will work regardless of what type is chosen.
About the simplest example is indeed the function id, which might be defined like this:
id :: forall a. a -> a
id x = x
Because it's parametrically polymorphic, we can simply choose any type at all we like and use id as if it was defined on that type. For example as if it were any of the following:
id :: Bool -> Bool
id x = x
id :: Int -> Int
id x = x
id :: Maybe (Int -> [IO Bool]) -> Maybe (Int -> [IO Bool])
id x = x
But to ensure that the definition does work for any type, the compiler has to check a very strong restriction. Our id function can only use operations that don't depend on any property of any specific type at all. We can't call not x because the x might not be a Bool, we can't call x + 1 because the x might not be a number, we can't check whether x is equal to anything because it might not be a type that supports equality, etc., etc. In fact there is almost nothing you can do with x in the body of id. We can't even ignore x and return some other value of type a; this would require us to write an expression for a value that can be of any type at all, and the only things that can do that are things like undefined that don't evaluate to a value at all (because they throw exceptions). It's often said that in fact there is only one valid function with type forall a. a -> a (and that is id)[2].
This restriction on what you can do with values whose type contains variables isn't just a restriction for the sake of being picky, it's actually a huge part of what makes Haskell types useful. It means that just looking at the type of a function can often tell you quite a bit about what it can possibly do, and once you get used to it Haskellers rely on this kind of thinking all the time. For example, consider this function signature:
map :: forall a b. (a -> b) -> [a] -> [b]
Just from this type (and the assumption that the code doesn't do anything dumb like add in extra undefined elements of the list) I can tell:
All of the items in the resulting list are results of applying the input function; map has no other way of producing values of type b to put in the list (except undefined, etc.).
All of the items in the resulting list correspond to something in the input list mapped through the function; map has no way of getting any a values to feed to the function (except undefined, etc.).
If any items of the input list are dropped or re-ordered, it will be done in a "blind" way that isn't considering the elements at all, only their position in the list; map ultimately has no way of testing any property of the a and b values to decide which order they should go in. For example it might leave out the third element, or swap the 2nd and 76th elements if there are at least 100 elements, etc. But if it applies rules like that it will have to always apply them, regardless of the actual items in the list. It cannot e.g. drop the 4th element if it is less than the 5th element, or only keep outputs from the function that are "truthy", etc.
None of this would be true if Haskell allowed parametrically polymorphic types to have Python-like definitions that check the type of their arguments and then run different code. Such a definition for map could check if the function is supposed to return integers and if so return [1, 2, 3, 4] regardless of the input list, etc. So the type checker would be enforcing a lot less (and thus catching fewer mistakes) if it worked this way.
This kind of reasoning is formalised in the concept of free theorems; it's literally possible to derive formal proofs about a piece of code from its type (and thus get theorems for free). You can google this if you're interested in further reading, but Haskellers generally use this concept informally rather than doing real proofs.
Sometimes we do need non-parametric polymorphism. The main tool Haskell provides for that is type classes. If a type variable has a class constraint, then there will be an interface of class methods provided by that constraint. For example the Eq a constraint allows (==) :: a -> a -> Bool to be used, and your own TypeOf a constraint allows typeOf @a to be used. Type class methods do allow you to run different code for different types, so this breaks parametricity. Even just adding Eq a to the type of map means I can no longer assume property 3 from above.
map :: forall a b. Eq a => (a -> b) -> [a] -> [b]
Now map can tell whether some of the items in the original list are equal to each other, so it can use that to decide whether to include them in the result, and in what order. Likewise Monoid a or Monoid b would allow map to break the first two properties by using mempty :: a to produce new values that weren't in the list originally or didn't come from the function. If I add Typeable constraints I can't assume anything, because the function could do all of the Python-style checking of types to apply special-case logic, make use of existing values it knows about if a or b happen to be those types, etc.
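For concreteness, here is a hedged example (mine, not from the answer) of a function with the Eq-constrained type above that breaks property 3 by inspecting elements:
-- same type as the constrained map, but silently skips an element
-- whenever it equals its neighbour; parametricity no longer forbids this
sneakyMap :: Eq a => (a -> b) -> [a] -> [b]
sneakyMap f (x:y:rest) | x == y = sneakyMap f (y:rest)
sneakyMap f (x:rest) = f x : sneakyMap f rest
sneakyMap _ [] = []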
So something like your tweakId cannot be given the type forall a. a -> a, for theoretical reasons that are also extremely practically important. But if you need a function that behaves like your tweakId, adding a class constraint was the right thing to do to break out of the constraints of parametricity. However, simply being able to get a String for each type isn't enough; typeOf @a == "Int" doesn't tell the type checker that a can be used in operations requiring an Int. It knows that in that branch the equality check returned True, but that's just a Bool; the type checker isn't able to reason backwards to why this particular Bool is True and deduce that it could only have happened if a were the type Int. But there are alternative constructs using GADTs that do give the type checker additional knowledge within certain code branches, allowing you to check types at runtime and use different code for each type. The class Typeable is specifically designed for this, but it's a hammer that completely bypasses parametricity; I think most Haskellers would prefer to keep more type-based reasoning intact where possible.
[1] Parametric polymorphism is in contrast to the class-based polymorphism you may have seen in OO languages (where each class says how a method is implemented for objects of that specific class), or ad-hoc polymorphism (as seen in C++) where you simply define multiple definitions with the same name but different types, and the types at each application determine which definition is used. I'm not covering those in detail, but the key distinction is that both of them allow the definition to have different code for each supported type, rather than guaranteeing the same code will process all supported types.
[2] It's not 100% true that there's only one valid function with type forall a. a -> a unless you hide some caveats in "valid". But if you don't use any unsafe features (like unsafeCoerce or the foreign language interface), then a function with type forall a. a -> a will either always throw an exception or it will return its argument unchanged.
The "always throws an exception" isn't terribly useful so we usually assume an unknown function with that type isn't going to do that, and thus ignore this possibility.
There are multiple ways to implement "returns its argument unchanged", like id x = head . head . head $ [[[x]]], but they can only differ from the normal id in being slower by building up some structure around x and then immediately tearing it down again. A caller that's only worrying about correctness (rather than performance) can treat them all the same.
Thus, ignoring the "always undefined" possibility and treating all of the dumb elaborations of id x = x the same, we come to the perspective where we can say "there's only one function with forall a. a -> a".

How do we know the output `a` has the same value as input `a` in the type signature (a,b) -> a [duplicate]

New to Haskell so sorry if this is very basic
This example is taken from "Real World Haskell" -
ghci> :type fst
fst :: (a, b) -> a
They show the type of the fst function and then follow it with this paragraph...
"The result type of fst is a. We've already mentioned that parametric polymorphism makes the real type inaccessible: fst doesn't have enough information to construct a value of type a, nor can it turn an a into a b. So the only possible valid behaviour (omitting infinite loops or crashes) it can have is to return the first element of the pair."
I feel like I am missing the fundamental point of the paragraph, and perhaps something important about Haskell. Why couldn't the fst function return type b? Why couldn't it take the tuple as a param, but simply return an Int (or any other type that is NOT a)? I don't understand why it MUST return type a?
Thanks
If it did any of those things, its type would change. What the quote is saying is that, given that we know fst is of type (a, b) -> a, we can make those deductions about it. If it had a different type, we would not be able to do so.
For instance, see that
snd :: (a, b) -> a
snd (x, y) = y
does not type-check, and so we know a value of type (a, b) -> a cannot behave like snd.
Parametricity is basically the fact that a polymorphic function of a certain type must obey certain laws by construction — i.e., there is no well-typed expression of that type that does not obey them. So, for it to be possible to prove things about fst with it, we must first know fst's type.
Note especially the word polymorphism there: we can't do the same kind of deductions about non-polymorphic types. For instance,
myFst :: (Int, String) -> Int
myFst (a, b) = a
type-checks, but so does
myFst :: (Int, String) -> Int
myFst (a, b) = 42
and even
myFst :: (Int, String) -> Int
myFst (a, b) = length b
Parametricity relies crucially on the fact that a polymorphic function can't "look" at the types it is called with. So the only value of type a that fst knows about is the one it's given: the first element of the tuple.
The point is that once you have that type, the implementation options are greatly limited. If you returned an Int, then your type would be (a,b) -> Int. Since a could be anything, we can't gin one up out of thin air in the implementation without resorting to undefined, and so must return the one given to us by the caller.
You should read the Theorems for Free article.
Let's try to add some more hand-waving to that already given by Real World Haskell. Let's try to convince ourselves that, given a function fst with type (a,b) -> a, the only total function it can be is the following one:
fst (x,y) = x
First of all, we cannot return anything other than a value of type a; that is the premise that fst has type (a,b) -> a. So we cannot have fst (x,y) = y or fst (x,y) = 1, because that does not have the correct type.
Now, as RWH says, if I give fst an (Int,Int), fst doesn't know these are Ints. Furthermore, a and b are not required to belong to any type class, so fst has no values or functions associated with a or b available to it.
So fst only knows about the a value and the b value that I give it, and it can't turn the b into an a (it has no b -> a function), so it must return the given a value.
This isn't actually just magical hand-waving: one can actually deduce what possible expressions there are of a given polymorphic type. There is actually a program called djinn that does exactly that.
The point here is that both a and b are type variables (they might happen to be the same, but that's not required). Anyway, since for a given tuple of two elements fst always returns the first element, the returned type must always be the same as the type of the first element.
The fundamental thing you're probably missing is this:
In most programming languages, if you say "this function returns any type", it means that the function can decide what type of value it actually returns.
In Haskell, if you say "this function returns any type", it means that the caller gets to decide what type that should be. (!)
So if you write foo :: Int -> x, it can't just return a String, because I might not ask it for a String. I might ask for a Customer, or a ThreadId, or anything.
Obviously, there's no way that foo can know how to create a value of every possible type, even types that don't exist yet. In short, it is impossible to write foo. Everything you try will give you type errors, and won't compile.
(Caveat: There is a way to do it. foo could loop forever, or throw an exception. But it cannot return a valid value.)
There's no way for a function to be able to create values of any possible type. But it's perfectly possible for a function to move data around without caring what type it is. Therefore, if you see a function that accepts any kind of data, the only thing it can be doing with it is to move it around.
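For example (my own illustration), ignoring bottoms, a function of type (a, a) -> a can do nothing but hand back one of its two components:
pick1, pick2 :: (a, a) -> a
pick1 (x, _) = x
pick2 (_, y) = y
-- there is no third total implementation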
Alternatively, if the type has to belong to a specific class, then the function can use the methods of that class on it. (Could. It doesn't have to, but it can if it wants to.)
Fundamentally, this is why you can actually tell what a function does just by looking at its type signature. The type signature tells you what the function "knows" about the data it's going to be given, and hence what possible operations it might perform. This is why searching for a Haskell function by its type signature is so damned useful.
You've heard the expression "in Haskell, if it compiles, it usually works right"? Well, this is why. ;-)

How does return statement work in Haskell? [duplicate]

Consider these functions
f1 :: Maybe Int
f1 = return 1
f2 :: [Int]
f2 = return 1
Both have the same statement, return 1, but the results are different: f1 gives the value Just 1 and f2 gives the value [1].
Looks like Haskell invokes two different versions of return based on the return type. I'd like to know more about this kind of function invocation. Is there a name for this feature in programming languages?
This is a long meandering answer!
As you've probably seen from the comments and Thomas's excellent (but very technical) answer, you've asked a very hard question. Well done!
Rather than try to explain the technical answer I've tried to give you a broad overview of what Haskell does behind the scenes without diving into technical detail. Hopefully it will help you to get a big picture view of what's going on.
return is an example of type inference.
Most modern languages have some notion of polymorphism. For example var x = 1 + 1 will set x equal to 2. In a statically typed language 2 will usually be an int. If you say var y = 1.0 + 1.0 then y will be a float. The operator + (which is just a function with a special syntax) is polymorphic: it has versions for both ints and floats.
Most imperative languages, especially object-oriented languages, can only do type inference one way. Every variable has a fixed type. When you call a function it looks at the types of the arguments and chooses a version of that function that fits the types (or complains if it can't find one).
When you assign the result of a function to a variable the variable already has a type and if it doesn't agree with the type of the return value you get an error.
So in an imperative language the "flow" of type deduction follows time in your program: deduce the type of a variable, do something with it, and deduce the type of the result. In a dynamically typed language (such as Python or JavaScript) the type of a variable is not assigned until the value of the variable is computed (which is why there don't seem to be types). In a statically typed language the types are worked out ahead of time (by the compiler) but the logic is the same. The compiler works out what the types of variables are going to be, but it does so by following the logic of the program in the same way as the program runs.
In Haskell the type inference also follows the logic of the program. Being Haskell it does so in a very mathematically pure way (based on System F). The language of types (that is, the rules by which types are deduced) is similar to Haskell itself.
Now remember Haskell is a lazy language. It doesn't work out the value of anything until it needs it. That's why it makes sense in Haskell to have infinite data structures. It never occurs to Haskell that a data structure is infinite because it doesn't bother to work it out until it needs to.
Now all that lazy magic happens at the type level too. In the same way that Haskell doesn't work out what the value of an expression is until it really needs to, Haskell doesn't work out what the type of an expression is until it really needs to.
Consider this function
func (x : y : rest) = (x,y) : func rest
func _ = []
If you ask Haskell for the type of this function it has a look at the definition, sees [] and : and deduces that it's working with lists. But it never needs to look at the types of x and y; it just knows that they have to be the same because they end up in the same pairs in the same list. So it deduces the type of the function as [a] -> [(a, a)], where a is a type that it hasn't bothered to work out yet.
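You can check this in GHCi:
ghci> :type func
func :: [a] -> [(a, a)]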
So far no magic. But it's useful to notice the difference between this idea and how it would be done in an OO language. Haskell doesn't convert the arguments to Object, do its thing and then convert back. Haskell just hasn't been asked explicitly what the type of the list is. So it doesn't care.
Now try typing the following into ghci
maxBound - length ""
maxBound : "Hello"
Now what just happened!? maxBound must be a Char because I put it on the front of a string, and it must be an integer because I subtracted a length from it and got a number. Plus the two values are clearly very different.
So what is the type of maxBound? Let's ask ghci!
:type maxBound
maxBound :: Bounded a => a
AAargh! What does that mean? Basically it means that it hasn't bothered to work out exactly what a is, but it has to be Bounded. If you type :info Bounded you get three useful lines
class Bounded a where
  minBound :: a
  maxBound :: a
and a lot of less useful lines
So if a is Bounded there are values minBound and maxBound of type a.
In fact, under the hood, a Bounded constraint is just a value: a record (a "dictionary") with fields minBound and maxBound. Because it's a value, Haskell doesn't look at it until it really needs to.
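A rough sketch of that dictionary (the names are mine; GHC's actual internals differ in detail):
-- the 'Bounded a' constraint is passed around as a value like this one
data BoundedDict a = BoundedDict { minBound' :: a, maxBound' :: a }

charBounded :: BoundedDict Char
charBounded = BoundedDict { minBound' = '\NUL', maxBound' = '\1114111' }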
So I appear to have meandered somewhere in the region of the answer to your question. Before we move on to return (which, as you may have noticed from the comments, is a wonderfully complex beast), let's look at read.
ghci again
read "42" + 7
read "'H'" : "ello"
length (read "[1,2,3]")
and hopefully you won't be too surprised to find that there are definitions
read :: Read a => String -> a
class Read a where
  read :: String -> a
so Read a is just a record containing a single value, which is a function String -> a. It's very tempting to assume that there is one read function which looks at a string, works out what type is contained in the string, and returns that type. But it does the opposite. It completely ignores the string until it's needed. When the value is needed, Haskell first works out what type it's expecting; once it's done that, it goes and gets the appropriate version of the read function and combines it with the string.
Now consider something slightly more complex:
readList :: Read a => [String] -> [a]
readList strs = map read strs
Under the hood, readList actually takes two arguments:
-- pseudocode: the Read constraint becomes an ordinary dictionary argument
readList' :: Read a -> [String] -> [a]
readList' {read = f} strs = map f strs
Again, as Haskell is lazy, it only bothers looking at the arguments when it needs to find out the return value; at that point it knows what a is, so the compiler can go and find the right version of Read. Until then it doesn't care.
Hopefully that's given you a bit of an idea of what's happening and why Haskell can "overload" on the return type. But it's important to remember it's not overloading in the conventional sense. Every function has only one definition. It's just that one of the arguments is a bag of functions. readList' doesn't ever know what types it's dealing with. It just knows it gets a function String -> a and some Strings; to do the application it just passes the arguments to map. map in turn doesn't even know it gets strings. When you get deeper into Haskell it becomes very important that functions don't know very much about the types they're dealing with.
Now let's look at return.
Remember how I said that the type system in Haskell was very similar to Haskell itself. Remember that in Haskell functions are just ordinary values.
Does this mean I can have a type that takes a type as an argument and returns another type? Of course it does!
You've seen some type functions already: Maybe takes a type a and returns another type whose values can either be Just a or Nothing. [] takes a type a and returns a list of as. Type functions in Haskell are usually containers. For example I could define a type function BinaryTree which stores a load of a's in a tree-like structure. There are of course lots of much stranger ones.
So, if these type functions are similar to ordinary types I can have a typeclass that contains type functions. One such typeclass is Monad
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
so here m is some type function. If I want to define Monad for m I need to define return and the scary-looking operator below it (which is called bind).
As others have pointed out, return is a really misleading name for a fairly boring function. The team that designed Haskell have since realised their mistake and they're genuinely sorry about it. return is just an ordinary function that takes an argument and returns a Monad with that type in it. (You never asked what a Monad actually is, so I'm not going to tell you.)
Let's define Monad for m = Maybe!
First I need to define return. What should return x be? Remember I'm only allowed to define the function once, so I can't look at x because I don't know what type it is. I could always return Nothing, but that seems a waste of a perfectly good function. Let's define return x = Just x because that's literally the only other thing I can do.
What about the scary bind thing? What can we say about x >>= f? Well, x is a Maybe a for some unknown type a, and f is a function that takes an a and returns a Maybe b. Somehow I need to combine these to get a Maybe b.
So I need to define Nothing >>= f. I can't call f because it needs an argument of type a, and I don't have a value of type a; I don't even know what a is. I've only got one choice, which is to define
Nothing >>= f = Nothing
What about Just x >>= f? Well, I know x is of type a and f takes an a as an argument, so I can apply f to x and get a value of type Maybe b, which is exactly what I need. So the only sensible definition is
Just x >>= f = f x
So I've got a Monad! What if m is List? Well, I can follow a similar sort of logic and define
return x = [x]
[] >>= f = []
(x : xs) >>= f = f x ++ (xs >>= f)
Hooray, another Monad! It's a nice exercise to go through the steps and convince yourself that there's no other sensible way of defining this.
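You can try both in GHCi (which uses the real instances from base):
ghci> Just 1 >>= \x -> Just (x + 1)
Just 2
ghci> [1,2,3] >>= \x -> [x, x*10]
[1,10,2,20,3,30]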
So what happens when I call return 1?
Nothing!
Haskell's lazy, remember. The thunk return 1 (technical term) just sits there until someone needs the value. As soon as Haskell needs the value it knows what type the value should be. In particular it can deduce that m is List. Now that it knows that, Haskell can find the instance of Monad for List. As soon as it does that it has access to the correct version of return.
So finally Haskell is ready to call return, which in this case returns [1]!
The return function is from the Monad class:
class Applicative m => Monad (m :: * -> *) where
  ...
  return :: a -> m a
So return takes any value of type a and results in a value of type m a. The monad, m, as you've observed is polymorphic using the Haskell type class Monad for ad hoc polymorphism.
At this point you probably realize return is not a good, intuitive name. It's not even a built-in function or a statement like in many other languages. In fact a better-named and identically-operating function exists: pure. In almost all cases return = pure.
That is, the function return is the same as the function pure (from the Applicative class) - I often think to myself "this monadic value is purely the underlying a" and I try to use pure instead of return if there isn't already a convention in the codebase.
You can use return (or pure) for any type that is an instance of Monad. This includes the Maybe monad to get a value of type Maybe a:
instance Monad Maybe where
  ...
  return = pure -- which is from Applicative
  ...
instance Applicative Maybe where
  pure = Just
Or for the list monad to get a value of [a]:
instance Applicative [] where
  {-# INLINE pure #-}
  pure x = [x]
Or, as a more complex example, Aeson's parse monad to get a value of type Parser a:
instance Applicative Parser where
  pure a = Parser $ \_path _kf ks -> ks a

How do we overcome the compile time and runtime gap when programming in a Dependently Typed Language?

I'm told that in a dependent type system, "types" and "values" are mixed, and we can treat both of them as "terms" instead.
But there is something I can't understand: in a strongly typed programming language without dependent types (like Haskell), types are decided (inferred or checked) at compile time, but values are decided (computed or input) at runtime.
I think there must be a gap between these two stages. Just think: if a value is interactively read from STDIN, how can we reference this value in a type which must be decided ahead of time?
E.g. there is a natural number n and a list of natural numbers xs (which contains n elements) which I need to read from STDIN. How can I put them into a data structure Vect n Nat?
Suppose we input n :: Int at runtime from STDIN. We then read n strings, and store them into vn :: Vect n String (pretend for the moment this can be done).
Similarly, we can read m :: Int and vm :: Vect m String. Finally, we concatenate the two vectors: vn ++ vm (simplifying a bit here). This can be type checked, and will have type Vect (n+m) String.
Now it is true that the type checker runs at compile time, before the values n,m are known, and also before vn,vm are known. But this does not matter: we can still reason symbolically on the unknowns n,m and argue that vn ++ vm has that type, involving n+m, even if we do not yet know what n+m actually is.
It is not that different from doing math, where we manipulate symbolic expressions involving unknown variables according to some rules, even if we do not know the values of the variables. We don't need to know what number is n to see that n+n = 2*n.
Similarly, the type checker can type check
-- pseudocode
readNStrings :: (n :: Int) -> IO (Vect n String)
readNStrings O = return Vect.empty
readNStrings (S p) = do
  s <- getLine
  vp <- readNStrings p
  return (Vect.cons s vp)
(Well, actually some more help from the programmer could be needed to typecheck this, since it involves dependent matching and recursion. But I'll neglect this.)
Importantly, the type checker can check that without knowing what n is.
Note that the same issue actually already arises with polymorphic functions.
fst :: forall a b. (a, b) -> a
fst (x, y) = x
test1 = fst @Int @Float (2, 3.5)
test2 = fst @String @Bool ("hi!", True)
...
One might wonder "how can the typechecker check fst without knowing what types a and b will be at runtime?". Again, by reasoning symbolically.
With type arguments this is arguably more obvious, since we usually run the programs after type erasure, unlike value parameters like our n :: Int above, which cannot be erased. Still, there is some similarity between universally quantifying over types and over Int.
It seems to me that there are two questions here:
Given that some values are unknown during compile-time (e.g., values read from STDIN), how can we make use of them in types? (Note that chi has already given an excellent answer to this.)
Some operations (e.g., getLine) seem to make absolutely no sense at compile-time; how could we possibly talk about them in types?
The answer to (1), as chi has said, is symbolic or abstract reasoning. You can read in a number n, and then have a procedure that builds a Vect n Nat by reading from the command line n times, making use of arithmetic properties such as the fact that 1+(n-1) = n for nonzero natural numbers.
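In GHC Haskell, that procedure might be sketched as follows (my own sketch, not from the answer; since n is only known at runtime, the result is hidden behind an existential wrapper):
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vect (n :: Nat) a where
  VNil  :: Vect 'Z a
  VCons :: a -> Vect n a -> Vect ('S n) a

-- the length index is existentially hidden, because we cannot
-- know it before the program runs
data SomeVect a where
  SomeVect :: Vect n a -> SomeVect a

readStrings :: Int -> IO (SomeVect String)
readStrings 0 = pure (SomeVect VNil)
readStrings k = do
  s <- getLine
  rest <- readStrings (k - 1)
  case rest of
    SomeVect v -> pure (SomeVect (VCons s v))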
The answer to (2) is a bit more subtle. Naively, you might want to say "this function returns a vector of length n, where n is read from the command line". There are two types you might try to give this (apologies if I'm getting Haskell notation wrong)
unsafePerformIO (do n <- getLine; return (IO (Vect (read n :: Int) Nat)))
or (in pseudo-Coq notation, since I'm not sure what Haskell's notation for existential types is)
IO (exists n, Vect n Nat)
These two types can actually both be made sense of, and say different things. The first type, to me, says "at compile time, read n from the command line, and return a function which, at runtime, gives a vector of length n by performing IO". The second type says "at runtime, perform IO to get a natural number n and a vector of length n".
The way I like looking at this is that all side effects (other than, perhaps, non-termination) are monad transformers, and there is only one monad: the "real-world" monad. Monad transformers work just as well at the type level as at the term level; the one thing which is special is run :: M a -> a which executes the monad (or stack of monad transformers) in the "real world". There are two points in time at which you can invoke run: one is at compile time, where you invoke any instance of run which shows up at the type level. Another is at runtime, where you invoke any instance of run which shows up at the value level. Note that run only makes sense if you specify an evaluation order; if your language does not specify whether it is call-by-value or call-by-name (or call-by-push-value or call-by-need or call-by-something-else), you can get incoherencies when you try to compute a type.

Functions don't just have types: They ARE Types. And Kinds. And Sorts. Help put a blown mind back together

I was doing my usual "Read a chapter of LYAH before bed" routine, feeling like my brain was expanding with every code sample. At this point I was convinced that I understood the core awesomeness of Haskell, and now just had to understand the standard libraries and type classes so that I could start writing real software.
So I was reading the chapter about applicative functors when all of a sudden the book claimed that functions don't merely have types, they are types, and can be treated as such (for example, by making them instances of type classes). (->) is a type constructor like any other.
My mind was blown yet again, and I immediately jumped out of bed, booted up the computer, went to GHCi and discovered the following:
Prelude> :k (->)
(->) :: ?? -> ? -> *
What on earth does it mean?
If (->) is a type constructor, what are the value constructors? I can take a guess, but would have no idea how to define it in traditional data (->) ... = ... | ... | ... format. It's easy enough to do this with any other type constructor: data Either a b = Left a | Right b. I suspect my inability to express it in this form is related to the extremely weird type signature.
What have I just stumbled upon? Higher kinded types have kind signatures like * -> * -> *. Come to think of it... (->) appears in kind signatures too! Does this mean that not only is it a type constructor, but also a kind constructor? Is this related to the question marks in the type signature?
I have read somewhere (wish I could find it again, Google fails me) about being able to extend type systems arbitrarily by going from Values, to Types of Values, to Kinds of Types, to Sorts of Kinds, to something else of Sorts, to something else of something elses, and so on forever. Is this reflected in the kind signature for (->)? Because I've also run into the notion of the Lambda cube and the calculus of constructions without taking the time to really investigate them, and if I remember correctly it is possible to define functions that take types and return types, take values and return values, take types and return values, and take values which return types.
If I had to take a guess at the type signature for a function which takes a value and returns a type, I would probably express it like this:
a -> ?
or possibly
a -> *
Although I see no fundamental immutable reason why the second example couldn't easily be interpreted as a function from a value of type a to a value of type *, where * is just a type synonym for string or something.
The first example better expresses a function whose type transcends a type signature in my mind: "a function which takes a value of type a and returns something which cannot be expressed as a type."
You touch so many interesting points in your question, so I am
afraid this is going to be a long answer :)
Kind of (->)
The kind of (->) is * -> * -> *, if we disregard the boxity GHC
inserts. But there is no circularity going on, the ->s in the
kind of (->) are kind arrows, not function arrows. Indeed, to
distinguish them kind arrows could be written as (=>), and then
the kind of (->) is * => * => *.
We can regard (->) as a type constructor, or maybe rather a type
operator. Similarly, (=>) could be seen as a kind operator, and
as you suggest in your question we need to go one 'level' up. We
return to this later in the section Beyond Kinds, but first:
How the situation looks in a dependently typed language
You ask how the type signature would look for a function that takes a
value and returns a type. This is impossible to do in Haskell:
functions cannot return types! You can simulate this behaviour using
type classes and type families, but let us for illustration change
language to the dependently typed language
Agda. This is a
language with similar syntax as Haskell where juggling types together
with values is second nature.
To have something to work with, we define a data type of natural
numbers, for convenience in unary representation as in
Peano Arithmetic.
Data types are written in
GADT style:
data Nat : Set where
Zero : Nat
Succ : Nat -> Nat
Set is equivalent to * in Haskell, the "type" of all (small) types,
such as Natural numbers. This tells us that the type of Nat is
Set, whereas in Haskell, Nat would not have a type, it would have
a kind, namely *. In Agda there are no kinds, but everything has
a type.
We can now write a function that takes a value and returns a type.
Below is the function which takes a natural number n and a type,
and iterates the List constructor applied to this type n times.
(In Agda, [a] is usually written List a)
listOfLists : Nat -> Set -> Set
listOfLists Zero a = a
listOfLists (Succ n) a = List (listOfLists n a)
Some examples:
listOfLists Zero Bool = Bool
listOfLists (Succ Zero) Bool = List Bool
listOfLists (Succ (Succ Zero)) Bool = List (List Bool)
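(As an aside, the earlier remark about simulating this in Haskell with
type classes and type families might look like this; the sketch is
mine, not the answer's:)
{-# LANGUAGE DataKinds, TypeFamilies #-}

data Nat = Zero | Succ Nat

-- a closed type family: a function from types to types
type family ListOfLists (n :: Nat) a where
  ListOfLists 'Zero     a = a
  ListOfLists ('Succ n) a = [ListOfLists n a]
-- e.g. ListOfLists ('Succ ('Succ 'Zero)) Bool = [[Bool]]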
We can now make a map function that operates on listOfLists.
We need to take a natural number that is the number of iterations
of the list constructor. The base cases are when the number is
Zero; then listOfLists is just the identity, and we apply the function.
The other is the empty list, where the empty list is returned.
The step case is a bit more involved: we apply mapN to the head
of the list, but with one layer less of nesting, and mapN
to the rest of the list.
mapN : {a b : Set} -> (a -> b) -> (n : Nat) ->
listOfLists n a -> listOfLists n b
mapN f Zero x = f x
mapN f (Succ n) [] = []
mapN f (Succ n) (x :: xs) = mapN f n x :: mapN f (Succ n) xs
In the type of mapN, the Nat argument is named n, so the rest of
the type can depend on it. So this is an example of a type that
depends on a value.
As a side note, there are also two other named variables here,
namely the first arguments, a and b, of type Set. Type
variables are implicitly universally quantified in Haskell, but
here we need to spell them out, and specify their type, namely
Set. The brackets are there to make them invisible in the
definition, as they are always inferable from the other arguments.
Set is abstract
You ask what the constructors of (->) are. One thing to point out
is that Set (as well as * in Haskell) is abstract: you cannot
pattern match on it. So this is illegal Agda:
cheating : Set -> Bool
cheating Nat = True
cheating _ = False
Again, you can simulate pattern matching on type constructors in
Haskell using type families; one canonical example is given on
Brent Yorgey's blog.
Can we define -> in Agda? Since we can return types from
functions, we can define our own version of -> as follows:
_=>_ : Set -> Set -> Set
a => b = a -> b
(Infix operators are written _=>_ rather than (=>).) This
definition has very little content, and is very similar to doing a
type synonym in Haskell:
type Fun a b = a -> b
Beyond kinds: Turtles all the way down
As promised above, everything in Agda has a type, but then
the type of _=>_ must have a type! This touches your point
about sorts, which is, so to speak, one layer above Set (the kinds).
In Agda this is called Set1:
FunType : Set1
FunType = Set -> Set -> Set
And in fact, there is a whole hierarchy of them! Set is the type of
"small" types: data types in Haskell. But then we have Set1,
Set2, Set3, and so on. Set1 is the type of types which mention
Set. This hierarchy is to avoid inconsistencies such as Girard's
paradox.
As noticed in your question, -> is used for types and kinds in
Haskell, and the same notation is used for function space at all
levels in Agda. This must be regarded as a built in type operator,
and the constructors are lambda abstraction (or function
definitions). This hierarchy of types is similar to the setting in
System F omega, and more
information can be found in the later chapters of
Pierce's Types and Programming Languages.
Pure type systems
In Agda, types can depend on values, and functions can return types,
as illustrated above, and we also had a hierarchy of
types. Systematic investigation of different systems of the lambda
calculi is investigated in more detail in Pure Type Systems. A good
reference is
Lambda Calculi with Types by Barendregt,
where PTS are introduced on page 96, and many examples on page 99 and onwards.
You can also read more about the lambda cube there.
Firstly, the ?? -> ? -> * kind is a GHC-specific extension. The ? and ?? are just there to deal with unboxed types, which behave differently from just * (which has to be boxed, as far as I know). So ?? can be any normal type or an unboxed type (e.g. Int#); ? can be either of those or an unboxed tuple. There is more information here: Haskell Weird Kinds: Kind of (->) is ?? -> ? -> *
I think a function can't return an unboxed type because functions are lazy. Since a lazy value is either a value or a thunk, it has to be boxed. Boxed just means it is a pointer rather than just a value: it's like Integer vs. int in Java.
Since you are probably not going to be using unboxed types in LYAH-level code, you can imagine that the kind of -> is just * -> * -> *.
Since the ? and ?? are basically just more general versions of *, they do not have anything to do with sorts or anything like that.
However, since -> is just a type constructor, you can actually partially apply it; for example, (->) e is an instance of Functor and Monad. Figuring out how to write these instances is a good mind-stretching exercise.
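If you want to check your answer afterwards: these are (equivalent to) the instances that already live in base, so treat them as a spoiler rather than code to compile alongside the Prelude:
instance Functor ((->) e) where
  fmap g h = g . h -- post-compose with the function

instance Applicative ((->) e) where
  pure = const -- ignore the environment
  f <*> g = \e -> f e (g e) -- share the environment

instance Monad ((->) e) where
  h >>= k = \e -> k (h e) e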
As far as value constructors go, they would have to just be lambdas (\x -> ...) or function declarations. Since functions are so fundamental to the language, they get their own syntax.
