If I do the following
functionS (x,y) = y
:t functionS
functionS :: (a, b) -> b
Now with this function:
functionC x y = if (x > y) then True else False
:t functionC
I would expect to get:
functionC :: (Ord a, Ord b) => a -> b -> Bool
But I get:
functionC :: Ord a => a -> a -> Bool
GHCi seems to be OK with the two previous results, but why does it give me the second one? Why aren't both type variables a AND b introduced?
I think you might be misreading type signatures. Through no fault of your own: the examples you're using to inform your thinking are kind of confusing. In particular, in your tuple example
functionS :: (a,b) -> b
functionS (x,y) = y
The notation (_,_) means two different things. In the first line, (a,b) refers to a type: the type of pairs whose first element has type a and whose second has type b. In the second line, (x,y) refers to a specific pair, where x has type a and y has type b. While this "pun" provides a useful mnemonic, it can be confusing while you are first getting the hang of it. I would rather the type of pairs were written with a regular type constructor:
functionS :: Pair a b -> b
functionS (Pair x y) = y
So, moving on to your question. In the signature you were given,
functionC :: Ord a => a -> a -> Bool
a is a type. Ord a says that elements of the type a are orderable with respect to each other. The function takes two arguments of the same type. Some types that are orderable are Integer (numerically), String (lexicographically), and a bunch of others. That means that you can tell which of two Integers is the smaller, or which of two Strings is the smaller. However, we don't necessarily know how to tell whether an Integer is smaller than a String (and this is good! Have you seen what kinds of shenanigans JavaScript has to do to support untyped equality? Haskell doesn't have to solve this problem at all!). So that's what this signature is saying: there is only one single orderable type, a, and the function takes two elements of this same type.
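To make this concrete, here is how functionC behaves (a sketch; the comments describe what GHCi would do):

functionC 3 5               -- False; both arguments share one numeric type
functionC "apple" "banana"  -- False; both arguments are Strings
functionC 3 "banana"        -- type error: 3 and "banana" cannot both
                            -- have the single type a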
You might still be wondering why functionS's signature has two different type variables. It's because there is no constraint confining them to be the same, such as having to order them against each other. functionS works equally well with a pair where both components are integers as when one is an integer and the other is a string. It doesn't matter. And Haskell always picks the most general type that works. So if they are not forced to be the same, they will be different.
There are more technical ways to explain all this, but I felt an intuitive explanation was in order. I hope it's helpful!
I've just read this paper ("Type classes: exploring the design space" by Peyton Jones, Jones & Meijer), which explains some challenges with the early type class system of Haskell, and how to improve it.
Many of the issues that they raise are related to context reduction, which is a way to reduce the set of constraints over instance and function declarations by following the "reverse entailment" relationship.
e.g. if you have somewhere instance (Ord a, Ord b) => Ord (a, b) ..., then within contexts, Ord (a, b) gets reduced to {Ord a, Ord b} (reduction does not always shrink the number of constraints).
I did not understand from the paper why this reduction was necessary.
Well, I gathered it was used to perform some form of type checking. When you have your reduced set of constraints, you can check that there exists some instance that can satisfy them; otherwise it's an error. I'm not too sure what the added value of that is, since you would notice the problem at the use site anyway, but okay.
But even if you have to do that check, why use the result of reduction inside inferred types? The paper points out it leads to unintuitive inferred types.
The paper is quite ancient (1997) but as far as I can tell, context reduction is still an ongoing concern. The Haskell 2010 spec does mention the inference behaviour I explain above (link).
So, why do it this way?
I don't know if this is The Reason, necessarily, but it might be considered A Reason: in early Haskell, type signatures were only permitted to have "simple" constraints, namely, a type class name applied to a type variable. Thus, for example, all of these were okay:
Ord a => a -> a -> Bool
Eq a => a -> a -> Bool
Graph gr => gr n e -> [n]
But none of these:
Ord (Tree a) => Tree a -> Tree a -> Bool
Eq (a -> b) => (a -> b) -> (a -> b) -> Bool
Graph Gr => Gr n e -> [n]
I think there was a feeling then -- and still today, as well -- that allowing the compiler to infer a type which one couldn't write manually would be a bit unfortunate. Context reduction was a way of turning the above signatures either into ones that could be written by hand as well or an informative error. For example, since one might reasonably have
instance Ord a => Ord (Tree a)
in scope, we could turn the illegal signature Ord (Tree a) => ... into the legal signature Ord a => .... On the other hand, if we don't have any instance of Eq for functions in scope, one would report an error about the type which was inferred to require Eq (a -> b) in its context.
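For a small sketch of that reduction in action (Tree and leqTree here are illustrative names, not from the question):

data Tree a = Leaf | Node a (Tree a) (Tree a)
  deriving (Eq, Ord)  -- supplies "instance Ord a => Ord (Tree a)"

-- The body needs Ord (Tree a); context reduction through the derived
-- instance turns that into the simple, legal constraint Ord a.
leqTree :: Ord a => Tree a -> Tree a -> Bool
leqTree l r = l <= r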
This has a couple of other benefits:
Intuitively pleasing. Many of the context reduction rules do not change whether the type is legal, but do reflect things humans would do when writing the type. I'm thinking here of the de-duplication and subsumption rules that let you turn, e.g. (Eq a, Eq a, Ord a) into just Ord a -- a transformation one definitely would want to do for readability.
This can frequently catch stupid errors; rather than inferring a type like Eq (Integer -> Integer) => Bool which can't be satisfied in a law-abiding way, one can report an error like Perhaps you did not apply a function to enough arguments?. Much friendlier!
It becomes the compiler's job to pinpoint what went wrong. Instead of inferring a complicated context like Eq (Tree (Grizwump a, [Flagle (Gr n e) (Gr n' e') c])) and complaining that the context is not satisfiable, it instead is forced to reduce this to the constituent constraints; it will instead complain that we couldn't determine Eq (Grizwump a) from the existing context -- a much more precise and actionable error.
I think this is indeed desirable in a dictionary passing implementation. In such an implementation, a "dictionary", that is, a tuple or record of functions is passed as implicit argument for every type class constraint in the type of the applied function.
Now, the question is simply when and how those dictionaries are created. Observe that for simple types like Int, all dictionaries for whatever type class Int is an instance of will necessarily be constants.
Not so in the case of parameterized types like lists, Maybe or tuples. It is clear that to show a tuple, for instance, the Show instances of the actual tuple elements need to be known. Hence such a polymorphic dictionary cannot be a constant.
It appears that the principle guiding the dictionary passing is such that only dictionaries for types that appear as type variables in the type of the applied function are passed. Or, to put it differently: no redundant information is replicated.
Consider this function:
f :: (Show a, Show b) => (a,b) -> Int
f ab = length (show ab)
A tuple of show-able components is itself show-able, so a constraint like Show (a,b) need not appear when we already know (Show a, Show b).
An alternative implementation would be possible, though, where the caller would be responsible for creating and passing dictionaries. This could work without context reduction, such that the type of f would look like:
f :: Show (a,b) => (a,b) -> Int
But this would mean that the code to create the tuple dictionary would have to be repeated on every call site. And it is easy to come up with examples where the number of necessary constraints actually increases, like in:
g :: (Show (a,a), Show(b,b), Show (a,b), Show (b, a)) => a -> b -> Int
g a b = maximum (map length [show (a,a), show (a,b), show (b,a), show (b,b)])
It is instructive to implement a type class/instance system with actual records that are explicitly passed. For example:
data Show' a = Show' { show' :: a -> String }

showInt :: Show' Int
showInt = Show' { show' = intshow }
  where
    intshow :: Int -> String
    intshow = show
Once you do this you will probably easily recognize the need for "context reduction".
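For instance, continuing that sketch, the instance (Show a, Show b) => Show (a, b) becomes an explicit dictionary combinator (showPair and f' are names I've made up):

showPair :: Show' a -> Show' b -> Show' (a, b)
showPair da db = Show' { show' = \(x, y) ->
  "(" ++ show' da x ++ "," ++ show' db y ++ ")" }

-- The explicit analogue of f: rather than demanding a Show' (a, b)
-- dictionary from every caller, we build it from the two simple ones.
f' :: Show' a -> Show' b -> (a, b) -> Int
f' da db ab = length (show' (showPair da db) ab)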
I've lately been working on an API in Elm where one of the main types is contravariant. So, I've googled around to see what one can do with contravariant types and found that the Contravariant package in Haskell defines the Divisible type class.
It is defined as follows:
class Contravariant f => Divisible f where
  divide  :: (a -> (b, c)) -> f b -> f c -> f a
  conquer :: f a
It turns out that my particular type does suit the definition of the Divisible type class. While Elm does not support type classes, I do look at Haskell from time to time for some inspiration.
My question: Are there any practical uses for this type class? Are there known APIs out there in Haskell (or other languages) that benefit from this divide-conquer pattern? Are there any gotchas I should be aware of?
Thank you very much for your help.
One example:
Applicative is useful for parsing, because you can turn Applicative parsers of parts into a parser of wholes, needing only a pure function for combining the parts into a whole.
Divisible is useful for serializing (should we call this coparsing now?), because you can turn Divisible serializers of parts into a serializer of wholes, needing only a pure function for splitting the whole into parts.
I haven't actually seen a project that worked this way, but I'm (slowly) working on an Avro implementation for Haskell that does.
When I first came across Divisible I wanted it for divide, and had no idea what possible use conquer could be other than cheating (an f a out of nowhere, for any a?). But to make the Divisible laws check out for my serializers conquer became a "serializer" that encodes anything to zero bytes, which makes a lot of sense.
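As a sketch of what such a serializer might look like (this is my own minimal illustration, not code from an actual library):

import Data.Functor.Contravariant (Contravariant (..))
import Data.Functor.Contravariant.Divisible (Divisible (..))
import qualified Data.ByteString.Builder as B

newtype Serializer a = Serializer { runSerializer :: a -> B.Builder }

instance Contravariant Serializer where
  contramap f (Serializer g) = Serializer (g . f)

instance Divisible Serializer where
  -- split the whole into parts, serialize each part, concatenate
  divide split (Serializer sb) (Serializer sc) =
    Serializer (\a -> let (b, c) = split a in sb b <> sc c)
  -- the "serializer" that encodes anything to zero bytes
  conquer = Serializer (const mempty)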
Here's a possible use case.
In streaming libraries, one can have fold-like constructs like the ones from the foldl package, that are fed a sequence of inputs and return a summary value when the sequence is exhausted.
These folds are contravariant on their inputs, and can be made Divisible. This means that if you have a stream of elements where each element can be somehow decomposed into b and c parts, and you also happen to have a fold that consumes bs and another fold that consumes cs, then you can build a fold that consumes the original stream.
The actual folds from foldl don't implement Divisible, but they could, using a newtype wrapper. In my process-streaming package I have a fold-like type that does implement Divisible.
divide requires the return values of the constituent folds to be of the same type, and that type must be an instance of Monoid. If the folds return different, unrelated monoids, a workaround is to put each return value in a separate field of a tuple, leaving the other field as mempty. This works because a tuple of monoids is itself a Monoid.
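A sketch of such a newtype wrapper (RFold is a name I've invented; Fold and premap are from the foldl package):

import Control.Foldl (Fold)
import qualified Control.Foldl as L
import Data.Functor.Contravariant (Contravariant (..))
import Data.Functor.Contravariant.Divisible (Divisible (..))

newtype RFold r a = RFold { getFold :: Fold a r }

instance Contravariant (RFold r) where
  contramap f (RFold g) = RFold (L.premap f g)

instance Monoid r => Divisible (RFold r) where
  -- feed the b part to one fold and the c part to the other,
  -- then combine the two summaries with the Monoid
  divide split (RFold fb) (RFold fc) =
    RFold ((<>) <$> L.premap (fst . split) fb
                <*> L.premap (snd . split) fc)
  conquer = RFold (pure mempty)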
I'll examine the example of the core data types in Fritz Henglein's generalized radix sort techniques as implemented by Edward Kmett in the discrimination package.
While there's a great deal going on there, it largely focuses around a type like this
data Group a = Group (forall b . [(a, b)] -> [[b]])
If you have a value of type Group a you essentially must have an equivalence relationship on a because if I give you an association between as and some type b completely unknown to you then you can give me "groupings" of b.
groupId :: Group a -> [a] -> [[a]]
groupId (Group grouper) = grouper . map (\a -> (a, a))
You can see this as a core type for writing a utility library of groupings. For instance, we might want to know that if we can Group a and Group b then we can Group (a, b) (more on this in a second). Henglein's core idea is that if you start with some basic Groups on integers (we can write very fast Group Int32 implementations via radix sort) and then use combinators to extend them over all types, you will have generalized radix sort to algebraic data types.
So how might we build our combinator library?
Well, f :: Group a -> Group b -> Group (a, b) is pretty important in that it lets us make groups of product-like types. Normally, we'd get this from Applicative and liftA2, but Group, you'll notice, is Contravariant, not a Functor.
So instead we use Divisible
divided :: Group a -> Group b -> Group (a, b)
Notice that this arises in a strange way from
divide :: (a -> (b, c)) -> Group b -> Group c -> Group a
as it has the typical "reversed arrow" character of contravariant things. We can now understand things like divide and conquer in terms of their interpretation on Group.
Divide says that if I want to build a strategy for equating as using strategies for equating bs and cs, I can do the following for any type x
Take your partial relation [(a, x)] and map over it with a function f :: a -> (b, c), and a little tuple manipulation, to get a new relation [(b, (c, x))].
Use my Group b to discriminate [(b, (c, x))] into [[(c, x)]]
Use my Group c to discriminate each [(c, x)] into [[x]] giving me [[[x]]]
Flatten the inner layers to get [[x]] like we need
instance Divisible Group where
  conquer = Group $ return . fmap snd
  divide k (Group l) (Group r) = Group $ \xs ->
    -- a bit more cleverly done here...
    l [ (b, (c, d)) | (a, d) <- xs, let (b, c) = k a ] >>= r
We also get interpretations of the more tricky Decidable refinement of Divisible
class Divisible f => Decidable f where
  lose   :: (a -> Void) -> f a
  choose :: (a -> Either b c) -> f b -> f c -> f a

instance Decidable Group where
  -- at this instance, the methods specialize to:
  --   lose   :: (a -> Void) -> Group a
  --   choose :: (a -> Either b c) -> Group b -> Group c -> Group a
These read as saying that for any type a of which we can guarantee there are no values (we cannot produce values of Void by any means; a function a -> Void is a means of producing Void given an a, thus we must not be able to produce values of a by any means either!), we immediately get a grouping of zero values:
lose _ = Group (\_ -> [])
We can also play a similar game with choose as we did with divide above, except instead of sequencing our use of the input discriminators, we alternate between them, as sketched below.
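For instance, choose might look like this (my own sketch of a lawful implementation, not the discrimination library's actual code):

-- Split the relation by constructor, discriminate each side with its
-- own Group, and concatenate the resulting groupings:
choose k (Group l) (Group r) = Group $ \xs ->
  let lefts  = [ (b, x) | (a, x) <- xs, Left  b <- [k a] ]
      rights = [ (c, x) | (a, x) <- xs, Right c <- [k a] ]
  in l lefts ++ r rights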
Using these techniques we build up a library of "Groupable" things, namely Grouping
class Grouping a where
  grouping :: Group a
and note that nearly all the definitions arise from the basic definition atop groupingNat, which uses fast monadic vector manipulations to achieve an efficient radix sort.
Reading http://www.seas.upenn.edu/~cis194/spring13/lectures/04-higher-order.html it states
In particular, note that function arrows associate to the right, that
is, W -> X -> Y -> Z is equivalent to W -> (X -> (Y -> Z)). We can
always add or remove parentheses around the rightmost top-level arrow
in a type.
Function arrows associate to the right, but function application associates to the left, so what is the usefulness of this information? I feel I'm not understanding something, because to me it seems a meaningless point that function arrows associate to the right. Since function application always associates to the left, is that the only associativity I should be concerned with?
Function arrows associate to the right but [...] what is usefulness of this information?
If you see a type signature like, for example, f :: String -> Int -> Bool, you need to know the associativity of the function arrow to understand what the type of f really is:
if the arrow associates to the left, then the type means (String -> Int) -> Bool, that is, f takes a function as argument and returns a boolean.
if the arrow associates to the right, then the type means String -> (Int -> Bool), that is, f takes a string as argument and returns a function.
That's a big difference, and if you want to use f, you need to know which one it is. Since the function arrow associates to the right, you know that it has to be the second option: f takes a string and returns a function.
Function arrows associate to the right [...] function application associates to the left
These two choices work well together. For example, we can call the f from above as f "answer" 42 which really means (f "answer") 42. So we are passing the string "answer" to f which returns a function. And then we're passing the number 42 to that function, which returns a boolean. In effect, we're almost using f as a function with two arguments.
This is the standard way of writing functions with two (or more) arguments in Haskell, so it is a very common use case. Because of the associativity of function application and of the function arrow, we can write this common use case without parentheses.
When defining a two-argument curried function, we usually write something like this:
f :: a -> b -> c
f x y = ...
If the arrow did not associate to the right, the above type would instead have to be spelled out as a -> (b -> c). So the usefulness of ->'s associativity is that it saves us from writing too many parentheses when declaring function types.
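As a small illustration (add3, add3', and addTen are example names of my own):

-- These two signatures describe exactly the same type:
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

add3' :: Int -> (Int -> (Int -> Int))
add3' = add3

-- Partial application peels arguments off from the left:
addTen :: Int -> Int -> Int
addTen = add3 10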
If an operator # is 'right associative', it means this:
a # b # c # d = a # (b # (c # d))
... for any number of arguments. It behaves like foldr
This means that:
a -> b -> c -> d = a -> (b -> (c -> d))
Note: a -> (b -> (c -> d)) =/= ((a -> b) -> c) -> d ! This is very important.
What this tells us is that, say, foldr:
λ> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
Takes a function of type (a -> b -> b), and then returns... a function that takes a b, and then returns... a function that takes a [a], and then returns... a b. This means that we can apply functions like this
f a b c
because
f a b c = ((f a) b) c
and f returns a new function each time it is applied to an argument.
Essentially, this isn't very useful as such, but is important information for when we want to interpret and call function types.
However, for operators like (++), associativity matters: if chains of (++) were parsed left associatively, the growing left operand would be copied again and again, which would be very slow, so (++) is right associative.
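A rough illustration of why the nesting matters for (++) (the names slow and fast are mine):

-- Left-nested: ((a ++ b) ++ c) ++ ... re-copies the growing prefix at
-- every step, giving roughly quadratic behaviour overall.
slow :: [[a]] -> [a]
slow = foldl (++) []

-- Right-nested: a ++ (b ++ (c ++ ...)) traverses each chunk only once.
fast :: [[a]] -> [a]
fast = foldr (++) []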
The early functional language Lisp suffered from excessively nested parentheses (which make code (or even text (if you do not mind considering a broader context)) difficult to read). Over time, functional language designers opted to make functional code easy to read and write for pros, even at the cost of confusing rookies with less uniform rules.
In functional code, function type declarations like (String -> Int) -> Bool are much rarer than ones like String -> (Int -> Bool), because functions that return functions are a trademark of functional style. Thus associating arrows to the right helps reduce the number of parentheses (on average; you might need to map a function to a primitive type). For function applications it is vice versa.
The main purpose is convenience, because partial function application goes from left to right.
Every time you partially apply a function to a set of values, the remaining type has to be valid.
You can think of arrow types as a queue of types, where the queue itself is a type. During partial function application, you dequeue as many types from the queue as the number of arguments, yielding whatever remains of the queue. The resulting queue is still a valid type.
This is why types associate to the right. If types associated to the left, they would behave like a stack, and you would not be able to partially apply a function the same way without leaving "holes" or undefined domains. For instance, say you have the following function:
foo :: a -> b -> c -> d
If Haskell types were left-associative, then passing a single parameter to foo would yield the following invalid type:
((? -> b) -> c) -> d
You would then be forced to circumvent this by adding parentheses, which could hamper readability.
For a silly challenge I am trying to implement a list type using as little of the prelude as possible and without using any custom types (the data keyword).
I can construct and modify a list using tuples like so:
import Prelude (Int(..), Num(..), Eq(..))
cons x = (x, ())
prepend x xs = (x, xs)
head (x, _) = x
tail (_, x) = x
at xs n = if n == 0 then xs else at (tail xs) (n-1)
I cannot think of how to write an at (!!) function. Is this even possible in a static language?
If it is possible could you try to nudge me in the right direction without telling me the answer.
There is a standard trick known as Church encoding that makes this easy. Here's a generic example to get you started:
data Foo = A Int Bool | B String
fooValue1 = A 3 False
fooValue2 = B "hello!"
Now, a function that wants to use this piece of data must know what to do with each of the constructors. So, assuming it wants to produce some result of type r, it must at the very least have two functions, one of type Int -> Bool -> r (to handle the A constructor), and the other of type String -> r (to handle the B constructor). In fact, we could write the type that way instead:
type Foo r = (Int -> Bool -> r) -> (String -> r) -> r
You should read the type Foo r here as saying "a function that consumes a Foo and produces an r". The type itself "stores" a Foo inside a closure -- so that it will effectively apply one or the other of its arguments to the value it closed over. Using this idea, we can rewrite fooValue1 and fooValue2:
fooValue1 = \consumeA consumeB -> consumeA 3 False
fooValue2 = \consumeA consumeB -> consumeB "hello!"
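To see the encoding in use, here is a sketch of a consumer (describe is my own example name):

describe :: Foo String -> String
describe foo = foo (\n b -> "A " ++ show n ++ " " ++ show b)
                   (\s -> "B " ++ s)

-- describe fooValue1 evaluates to "A 3 False"
-- describe fooValue2 evaluates to "B hello!"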
Now, let's try applying this trick to real lists (though not using Haskell's fancy syntax sugar).
data List a = Nil | Cons a (List a)
Following the same format as before, consuming a list like this involves either giving a value of type r (in case the constructor was Nil) or telling what to do with an a and another List a. At first, this seems problematic, since:
type List a r = (r) -> (a -> List a -> r) -> r
isn't really a good type (it's recursive!). But we can instead demand that all the recursive arguments be reduced to r first; then we can adjust this type to make something more reasonable.
type List a r = (r) -> (a -> r -> r) -> r
(Again, we should read the type List a r as being "a thing that consumes a list of as and produces an r".)
There's one final trick that's necessary. What we would like to do is to enforce the requirement that the r that our List a r returns is actually constructed from the arguments we pass. That's a little abstract, so let's give an example of a bad value that happens to have type List a r, but which we'd like to rule out.
badList = \consumeNil consumeCons -> False
Now, badList has type List a Bool, but it's not really a function that consumes a list and produces a Bool, since in some sense there's no list being consumed. We can rule this out by demanding that the type work for any r, no matter what the user wants r to be:
type List a = forall r. (r) -> (a -> r -> r) -> r
This enforces the idea that the only way to get an r that gets us off the ground is to use the (user-supplied) consumeNil function. Can you see how to make this same refinement for our original Foo type?
If it is possible could you try and nudge me in the right direction without telling me the answer.
It's possible, in more than one way. But your main problem here is that you've not implemented lists. You've implemented fixed-size vectors whose length is encoded in the type.
Compare the types from adding an element to the head of a list vs. your implementation:
(:) :: a -> [a] -> [a]
prepend :: a -> b -> (a, b)
To construct an equivalent of the built-in list type, you'd need a function like prepend with a type resembling a -> b -> b. And if you want your lists to be parameterized by element type in a straightforward way, you need the type to further resemble a -> f a -> f a.
Is this even possible in a static language?
You're also on to something here, in that the encoding you're using works fine in something like Scheme. Languages with "dynamic" type systems can be regarded as having a single static type with implicit conversions and metadata attached, which obviously solves the type mismatch problem in a very extreme way!
I cannot think of how to write an at (!!) function.
Recalling that your "lists" actually encode their length in their type, it should be easy to see why it's difficult to write functions that do anything other than increment/decrement the length. You can actually do this, but it requires elaborate encoding and more advanced type system features. A hint in this direction is that you'll need to use type-level numbers as well. You'd probably enjoy doing this as an exercise as well, but it's much more advanced than encoding lists.
Solution A - nested tuples:
Your lists are really nested tuples - for example, they can hold items of different types, and their type reveals their length.
It is possible to write an indexing-like function for nested tuples, but it is ugly, and it won't correspond to the Prelude's lists. Something like this:
class List a b where ...
instance List () b where ...
instance List a b => List (b,a) b where ...
Solution B - use data
I recommend using the data construct. Tuples are internally something like this:
data (,) a b = Pair a b
so you aren't avoiding data. The division between "custom types" and "primitive types" is rather artificial in Haskell, as opposed to C.
Solution C - use newtype:
If you are fine with newtype but not data:
newtype List a = List (Maybe (a, List a))
Solution D - rank-2 types:
Use a rank-2 type:
type List a = forall b. b -> (a -> b -> b) -> b
list :: List Int
list = \n c -> c 1 (c 2 n) -- [1,2]
and write functions for them. I think this is closest to your goal. Google for "Church encoding" if you need more hints.
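As a taste of writing those functions, here is one possible head (my own sketch; using error for the empty case is a choice of mine, and the forall in the synonym needs the RankNTypes extension):

head' :: List a -> a
head' xs = xs (error "head': empty list") (\x _ -> x)

-- head' list evaluates to 1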
Let's set aside at, and just think about your first four functions for the moment. You haven't given them type signatures, so let's look at those; they'll make things much clearer. The types are
cons :: a -> (a, ())
prepend :: a -> b -> (a, b)
head :: (a, b) -> a
tail :: (a, b) -> b
Hmmm. Compare these to the types of the corresponding Prelude functions [1]:
return :: a -> [a]
(:) :: a -> [a] -> [a]
head :: [a] -> a
tail :: [a] -> [a]
The big difference is that, in your code, there's nothing that corresponds to the list type, []. What would such a type be? Well, let's compare, function by function.
cons/return: here, (a,()) corresponds to [a]
prepend/(:): here, both b and (a,b) correspond to [a]
head: here, (a,b) corresponds to [a]
tail: here, (a,b) corresponds to [a]
It's clear, then, that what you're trying to say is that a list is a pair. And prepend indicates that you then expect the tail of the list to be another list. So what would that make the list type? You'd want to write type List a = (a, List a) (although this would leave out (), your empty list, but I'll get to that later), but you can't do this: type synonyms can't be recursive.
After all, think about what the type of at/!! would be. In the Prelude, you have (!!) :: [a] -> Int -> a. Here, you might try at :: (a,b) -> Int -> a, but this won't work; you have no way to convert a b into an a. So you really ought to have at :: (a,(a,b)) -> Int -> a, but of course this won't work either. You'll never be able to work with the structure of the list (neatly), because you'd need an infinite type.
Now, you might argue that your type does stop, because () will finish a list. But then you run into a related problem: now, a length-zero list has type (), a length-one list has type (a,()), a length-two list has type (a,(a,())), etc. This is the problem: there is no single "list type" in your implementation, and so at can't have a well-typed first parameter.
You have hit on something, though; consider the definition of lists:
data List a = []
            | a : [a]
Here, [] :: [a], and (:) :: a -> [a] -> [a]. In other words, a list is isomorphic to something which is either a singleton value, or a pair of a value and a list:
newtype List' a = List' (Either () (a,List' a))
You were trying to use the same trick without creating a type, but it's this creation of a new type which allows you to get the recursion. And it's exactly your missing recursion which allows lists to have a single type.
[1]: On a related note, cons should be called something like singleton, and prepend should be cons, but that's not important right now.
You can implement the datatype List a as a pair (f, n), where f :: Nat -> a and n :: Nat, with n the length of the list (below, Int stands in for Nat):
type List a = (Int -> a, Int)
Implementing the empty list, the list operations cons, head, tail, and null, and a function convert :: List a -> [a] is left as an easy exercise.
(Disclaimer: stole this from Bird's Introduction to Functional Programming using Haskell.)
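One possible rendering of that exercise, using the List a type above (a sketch; the empty-list cases are partial, much like the Prelude's own head and tail):

nil :: List a
nil = (\_ -> error "empty list", 0)

cons :: a -> List a -> List a
cons x (f, n) = (\i -> if i == 0 then x else f (i - 1), n + 1)

hd :: List a -> a
hd (f, _) = f 0

tl :: List a -> List a
tl (f, n) = (f . (+ 1), n - 1)

null' :: List a -> Bool
null' (_, n) = n == 0

convert :: List a -> [a]
convert (f, n) = map f [0 .. n - 1]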
Of course, you could represent tuples via functions as well. And then True and False and the natural numbers ...