Understanding the Haskell type of this expression

I was doing a Haskell types exercise and this one has stumped me. The provided expression is:
f2 f g h = h.g.f
And the type for f2 is apparently:
f2 :: (a -> b1) -> (b1 -> b) -> (b -> c) -> a -> c
This seems overly complicated for such a short expression. Can someone explain why this makes sense as the type?

Why do you think it's complicated? It's just a function that takes three functions (of matching types) and returns a new one.
I've renamed b1 to b and updated other names, for consistency. So here is a slightly edited version:
f2 :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
Here, f2 takes f which has type a -> b. Meaning, it's a function, with some input of some type a (a can be just anything, no restrictions here) and some return value of type b.
Then f2 takes g, whose input type must match f's output, so it's b -> c. It couldn't be, say, e -> c (where e is something different from b), or the code wouldn't make sense: it wouldn't be possible to compose g . f.
And exactly the same goes for the third argument, h, with type c -> d.
The result of the composition (what f2 returns) is a new function that takes the same input as f and returns whatever h returns, so it's a -> d. With this we've covered the whole type definition.
Basically, it's a relatively long definition, but I think it's of a very simple nature.
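As a small sanity check, here is a sketch (the helper functions length, (* 2) and show are my own picks, not part of the exercise) showing how the type variables get instantiated with concrete types:

f2 :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
f2 f g h = h . g . f

lenTwiceShown :: String -> String
lenTwiceShown = f2 length (* 2) show
-- here a ~ String, b ~ Int, c ~ Int, d ~ String
-- lenTwiceShown "abc" == "6"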

I find these “determine the type of such-and-such expression” exercises generally a bit backwards. Types should always come first: you want to program a solution to some task, so you formulate the problem description as a type signature. Then you go ahead and actually write an implementation.
In this case, you'd start with the problem: I have three functions f, g and h, and a value of a type that I can pass to f. Furthermore, the functions have pairwise matching result/argument types. Hence the signature
f2 :: (α -> β) -> (β -> γ) -> (γ -> δ) -> α -> δ
You could now go on and implement this in explicit-pointed form, i.e.
f2 f g h x = h (g (f x))
which is still quite brief. After all, it's quite a simple task!
But in Haskell you can make it even shorter, by using the standard composition operator (.). The fact that the final implementation is so extremely short basically just comes down to the fact that f2 does essentially the same thing as (.), only twice. So this isn't any more surprising than when you have a very complex task with a complicated type signature, but discover some library that contains a function which does almost exactly that task. Obviously, invoking that ready-built function gives you a much shorter implementation than the task complexity would suggest, but the complexity is merely deferred to the library function.
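To make that concrete, here is a sketch with the signature written down first (using a, b, c, d in place of the Greek letters, since GHC wants ASCII type variables) and either implementation filling it in:

f2 :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
f2 f g h x = h (g (f x))   -- explicit-pointed version

f2' :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
f2' f g h = h . g . f      -- point-free version: (.) used twice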

Related

What does adjoining mean in practical uses?

I often run into the term "adjoin" when trying to understand some concepts. Those things are too abstract for me, as I'm an expert in neither the field nor category theory.
The simplest case I found is the Monoid Maybe a instance, which often doesn't behave as I would expect with regard to Nothing.
From Wikipedia we can learn that by "adjoining" an element to a semigroup we can get a different Monoid instance. I don't understand the sentence, but the equations given suggest it's exactly what I need (and for some reason is not the default):
Any semigroup S may be turned into a monoid simply by adjoining an element e not in S and defining e • s = s = s • e for all s ∈ S.
Does "adjoining" mean the same as "adding", at least in this case?
Are there other simple examples of this concept?
What would be simplest possible instance of something that is "left-adjoint"?
Sometimes "adjoining" means "adding something new", as in the semigroup-related sentence you quote. E.g. someone might say that using Maybe a means adding/adjoining a new element Nothing to the type a. Personally, I would only use "adding" for this, though.
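Here is a small sketch of that "adjoin an identity" construction in code (the names Adjoined, Unit and Old are mine, purely for illustration):

data Adjoined s = Unit | Old s

instance Semigroup s => Semigroup (Adjoined s) where
  Unit  <> y     = y
  x     <> Unit  = x
  Old a <> Old b = Old (a <> b)

instance Semigroup s => Monoid (Adjoined s) where
  mempty = Unit   -- the adjoined element becomes the identity

In recent versions of base, Maybe does exactly this: Nothing plays the role of Unit, which is why there is an instance Semigroup a => Monoid (Maybe a) with mempty = Nothing.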
This has nothing to do with adjoints in the categorical sense, which are a tricky concept.
Roughly put, assume you have a functional type of the form
F a -> b
where F is some mapping from types to types (more precisely, a functor). Sometimes, you can express an isomorphic type to the above one having the form
a -> G b
where "magically" the function F on the left side moved to the right side, changing into G.
The canonical example is currying: Let e.g.
F T = (T, Int)
G T = Int -> T
Then we have
(F a) -> b
-- definition of F
= (a, Int) -> b
-- currying
=~ a -> (Int -> b)
-- definition of G
= a -> G b
In such case, we write F -| G which is read as "F is left adjoint to G".
Every time you can "nicely move" an operation on the input type on the other side of the arrow, changing it into another operation on the output type, you have an adjoint. (Technically, "nicely" means we have a natural isomorphism)
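Here is the currying example written as a runnable sketch, with F a = (a, Int) and G b = Int -> b (the function names are mine); the two directions of the isomorphism are just curry and uncurry, specialised:

leftAdjunct :: ((a, Int) -> b) -> (a -> (Int -> b))   -- F a -> b  gives  a -> G b
leftAdjunct = curry

rightAdjunct :: (a -> (Int -> b)) -> ((a, Int) -> b)  -- a -> G b  gives  F a -> b
rightAdjunct = uncurry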

Wrapping a function's arguments

I am trying to figure out a mechanism to transform a generic function with n arguments into a function where each of the n arguments is wrapped inside a context.
A simple example in pseudo-Haskell, where the wrapped type is tuple:
wrap :: (a1 -> a2 -> a3 -> ...) -> Context c -> ((c, a1) -> (c, a2) -> (c, a3) -> ...)
The container which holds the contexts is suggested by the alias Context c, for which I'm yet to figure out a reasonable form. For example it could be an n-tuple, a list of cs or even a composition of type (c -> c -> c -> ...).
It would most likely have to do with some ingenious use of applicative functors or monads, but I currently have no idea where to start attacking the problem. Any lead would be deeply appreciated.
Edit. Resolution
As the comments below suggest, the behaviour can be described as "give me some generic function with n arguments and [some contexts] and attach a context to each argument of the function". This resulting function will be applied applicative-style to n structures S a using some operators which are implemented to make use of these contexts, something like:
(wrap f contexts) §> s1 <§> s2 <§> s3 <§> ....
As an example, I got something running with fixed-size vector arguments like FSVec c a. So I was wondering whether there is a method to take a generic function of type fi :: [a1] -> [a2] -> ... and some lengths and "zip" them to get fo :: FSVec c a1 -> FSVec c a2 -> .... And even more, can it be done in a generic way, i.e. applying an arbitrary constructor, not necessarily one which uses type-level machinery (e.g. a plain tuple constructor as in the example above)?
Alternate solution
Of course, I can always "zip" the contexts on the structures themselves, having to only reimplement (§>) and (<§>) to act accordingly and saving me a lot of trouble:
f §> (c1, s1) <§> (c2, s2) <§> (c3, s3) <§> ...
Still, I would like to "hide" these contexts so as to separate concerns. Apart from just being a matter of style, it is actually a good mental exercise to explore the power of applicative functors and applicative-like functions.
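For what it's worth, here is a sketch of the fixed two-argument case only (wrap2 and the pair-shaped context are my own hypothetical stand-ins, not the Context/§>/<§> machinery from the question); the contexts are simply ignored here, so this shows only the shape of the transformation:

wrap2 :: (a1 -> a2 -> b) -> (c, c) -> (c, a1) -> (c, a2) -> b
wrap2 f (_c1, _c2) (_c1', x1) (_c2', x2) = f x1 x2

Making this generic in the number of arguments would presumably need type classes over the function's shape or type-level lists, which is the open part of the question.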

Usefulness of "function arrows associate to the right"?

Reading http://www.seas.upenn.edu/~cis194/spring13/lectures/04-higher-order.html it states
In particular, note that function arrows associate to the right, that is, W -> X -> Y -> Z is equivalent to W -> (X -> (Y -> Z)). We can always add or remove parentheses around the rightmost top-level arrow in a type.
Function arrows associate to the right, but since function application associates to the left, what is the usefulness of this information? I feel I'm not understanding something, because to me it seems a meaningless point that function arrows associate to the right. Since function application always associates to the left, isn't that the only associativity I should be concerned with?
Function arrows associate to the right but [...] what is the usefulness of this information?
If you see a type signature like, for example, f : String -> Int -> Bool you need to know the associativity of the function arrow to understand what the type of f really is:
if the arrow associates to the left, then the type means (String -> Int) -> Bool, that is, f takes a function as argument and returns a boolean.
if the arrow associates to the right, then the type means String -> (Int -> Bool), that is, f takes a string as argument and returns a function.
That's a big difference, and if you want to use f, you need to know which one it is. Since the function arrow associates to the right, you know that it has to be the second option: f takes a string and returns a function.
Function arrows associate to the right [...] function application associates to the left
These two choices work well together. For example, we can call the f from above as f "answer" 42 which really means (f "answer") 42. So we are passing the string "answer" to f which returns a function. And then we're passing the number 42 to that function, which returns a boolean. In effect, we're almost using f as a function with two arguments.
This is the standard way of writing functions with two (or more) arguments in Haskell, so it is a very common use case. Because of the associativity of function application and of the function arrow, we can write this common use case without parentheses.
When defining a two-argument curried function, we usually write something like this:
f :: a -> b -> c
f x y = ...
If the arrow did not associate to the right, the above type would instead have to be spelled out as a -> (b -> c). So the usefulness of ->'s associativity is that it saves us from writing too many parentheses when declaring function types.
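As a small sketch (the function hasDigitAt is my own example, not from the lecture notes), the two-argument style and partial application work together like this:

hasDigitAt :: String -> Int -> Bool
hasDigitAt s n = n < length s && s !! n `elem` ['0'..'9']

-- Because String -> Int -> Bool means String -> (Int -> Bool),
-- partial application gives back the inner function:
checkHello :: Int -> Bool
checkHello = hasDigitAt "h3llo"   -- checkHello 1 == True, checkHello 0 == False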
If an operator # is 'right associative', it means this:
a # b # c # d = a # (b # (c # d))
... for any number of arguments. It behaves like foldr.
This means that:
a -> b -> c -> d = a -> (b -> (c -> d))
Note: a -> (b -> (c -> d)) =/= ((a -> b) -> c) -> d ! This is very important.
What this tells us is that, say, foldr:
λ> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
Takes a function of type (a -> b -> b), and then returns... a function that takes a b, and then returns... a function that takes a [a], and then returns... a b. This means that we can apply functions like this
f a b c
because
f a b c = ((f a) b) c
and f will return a new function each time an argument is given, until all of its arguments have been supplied.
Essentially, this isn't very useful as such, but is important information for when we want to interpret and call function types.
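To make that reading concrete, here is a small sketch (the names step1, step2 and total are mine): each partial application of foldr is itself a well-typed function.

step1 :: Num b => b -> [b] -> b
step1 = foldr (+)        -- one argument supplied, two to go

step2 :: Num b => [b] -> b
step2 = foldr (+) 0      -- two supplied, one to go

total :: Integer
total = foldr (+) 0 [1, 2, 3]   -- 6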
However, for operators like (++), the choice of associativity also matters for performance: if chains of (++) grouped to the left they would be very slow, so (++) is declared right-associative.
The early functional language Lisp suffered from excessively nested parentheses (which make code (or even text (if you do not mind considering a broader context)) difficult to read). Over time, functional language designers opted to make functional code easy to read and write for experienced users, even at the cost of confusing newcomers with less uniform rules.
In functional code, function type declarations like (String -> Int) -> Bool are much rarer than ones like String -> (Int -> Bool), because functions that return functions are a trademark of functional style. Thus associating arrows to the right helps reduce the number of parentheses (on average; you still need them when an argument maps a function to a primitive type, as above). For function application it is vice versa.
The main purpose is convenience, because partial function application goes from left to right.
Every time you partially apply a function to a set of values, the remaining type has to be valid.
You can think of arrow types as a queue of types, where the queue itself is a type. During partial function application, you dequeue as many types from the queue as the number of arguments, yielding whatever remains of the queue. The resulting queue is still a valid type.
This is why types associate to the right. If types associated to the left, the type would behave like a stack, and you wouldn't be able to partially apply it the same way without leaving "holes" or undefined domains. For instance, say you have the following function:
foo :: a -> b -> c -> d
If Haskell types were left-associative, then passing a single parameter to foo would yield the following invalid type:
((? -> b) -> c) -> d
You would then be forced to circumvent this by adding parentheses, which could hamper readability.

Understanding this type in Haskell?

I'm having some trouble understanding how this type declaration works.
The type is: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
So I interpret this as a function that takes a function, and that function takes another function, which outputs a value d.
So, this is how I make my function:
example :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
example f g h x = f (g (h x))
I'd really appreciate it if you could help me clarify this. Thank you!
I think you already know the theory behind the type you're writing, so I'll try to offer an intuitive way to read it (at least I hope that helps; your question is not totally clear to me).
When you read something like (a -> b) inside a type, that's a function, as you said. For example (Int -> Bool) is a function.
Let's make an example:
even :: Int -> Bool   -- A more generic version of that is in the Prelude
even n = n `rem` 2 == 0

filter :: (Int -> Bool) -> [Int] -> [Int]   -- And of that, too
filter _ [] = []
filter f (x:xs)
  | f x       = x : filter f xs
  | otherwise = filter f xs

filteredEven :: [Int]
filteredEven = filter even [1..5]   -- it gives [2, 4]
In this example we have a "higher-order function": a function that takes another function and uses it in some way.
In a function like the one you're defining, you simply use 3 functions (and another parameter). But the type tells you more than that.
Each function you declare in the type accepts a value returned by the previous one, so a solution will have the shape of the one you already showed: three nested applications. But the types are completely generic, and there is no total function that can produce a value of an arbitrary type out of nothing (total meaning that it always terminates with a value other than bottom whenever its arguments are total and non-bottom; it doesn't crash or return undefined, for example). So, if you want a total function, you have to have a way to produce each value you need from the function's context (its parameters).
In the example, using your names, you have to return a value of type d. You have only one way to produce a value of that type: the h function. But to use h you need a value of type c, and the only way to get one is the g function. That, in turn, needs a value of type b. Fortunately you have the function f, which in exchange for a value of type a (the remaining parameter) returns exactly the value needed. We have that value, and no other way to obtain values of these types, so the function can be written. We also can't alter the values we obtain (calling the functions multiple times doesn't help, because of purity and because there's only one way to produce each value), so this is the only way to construct the function if we want it to be total:
example :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
example f g h x = h (g (f x))
We could write the function in other ways, but they would always give the same results (as long as example, f, g and h are total and x is not bottom). So the type describes the function really well: we can understand how the function works just by looking at the type!

Algebraically interpreting polymorphism

So I understand the basic algebraic interpretation of types:
Either a b ~ a + b
(a, b) ~ a * b
a -> b ~ b^a
() ~ 1
Void ~ 0 -- from Data.Void
... and that these relations are true for concrete types, like Bool, as opposed to polymorphic types like a. I also know how to translate type signatures with polymorphic types into their concrete type representations by just translating the Church encoding according to the following isomorphism:
(forall r . (a -> r) -> r) ~ a
So if I have:
id :: forall a . a -> a
I know that it does not mean id ~ a^a, but it actually means:
id :: forall a . (() -> a) -> a
id ~ ()
~ 1
Similarly:
pair :: forall r . (a -> b -> r) -> r
pair ~ ((a, b) -> r) -> r
~ (a, b)
~ a * b
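(For reference, that pair example can be written as runnable code; the names below are mine, and RankNTypes is needed for the forall inside the newtype.)

{-# LANGUAGE RankNTypes #-}

newtype ChurchPair a b = ChurchPair (forall r. (a -> b -> r) -> r)

toPair :: ChurchPair a b -> (a, b)
toPair (ChurchPair k) = k (,)

fromPair :: (a, b) -> ChurchPair a b
fromPair (x, y) = ChurchPair (\k -> k x y)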
Which brings me to my question. What is the "algebraic" interpretation of this rule:
(forall r . (a -> r) -> r) ~ a
For every concrete type isomorphism I can point to an equivalent algebraic rule, such as:
(a, (b, c)) ~ ((a, b), c)
a * (b * c) = (a * b) * c
a -> (b -> c) ~ (a, b) -> c
(c^b)^a = c^(b * a)
But I don't understand the algebraic equality that is analogous to:
(forall r . (a -> r) -> r) ~ a
This is the famous Yoneda lemma for the identity functor.
Check this post for a readable introduction, and any category theory textbook for more.
Briefly, given f :: forall r. (a -> r) -> r you can apply f id to get an a, and conversely, given x :: a you can take ($x) to get forall r. (a -> r) -> r.
These operations are mutually inverse. Proof:
Obviously ($x) id == x. I will show that
($(f id)) == f.
Since functions are equal when they are equal on all arguments, let's take x :: a -> r and show that
($(f id)) x == f x, i.e.
x (f id) == f x.
Since f is polymorphic, it works as a natural transformation; this is the naturality diagram for f:
            f_A
  Hom(A, A) ──→ A
    (x.) │      │ x
         ↓      ↓
  Hom(A, R) ──→ R
            f_R
So x . f == f . (x.).
Plugging identity, (x . f) id == f x. QED
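Written out as code (a sketch; the names toCont and fromCont are mine, and RankNTypes is required):

{-# LANGUAGE RankNTypes #-}

fromCont :: (forall r. (a -> r) -> r) -> a
fromCont f = f id

toCont :: a -> (forall r. (a -> r) -> r)
toCont x = ($ x)

-- fromCont (toCont x) = ($ x) id = x, and the naturality argument above
-- shows that toCont (fromCont f) behaves like f.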
(Rewritten for clarity)
There seem to be two parts to your question. One is implied and is asking what the algebraic interpretation of forall is, and the other is asking about the cont/Yoneda transformation, which sdcvvc's answer already covered pretty well.
I'll try to address the algebraic interpretation of forall for you. You mention that A -> B is B^A but I'd like to take that a step further and expand it out to B * B * B * ... * B (|A| times). Although we do have exponentiation as a notation for repeated multiplication like that, there's a more flexible notation, ∏ (uppercase Pi) representing arbitrary indexed products. There are two components to a Pi: the range of values we want to multiply over, and the expression that we're multiplying out. For example, at the value level, you might express the factorial function as fact i = ∏ [1..i] (λx -> x).
Going back to the world of types, we can view the exponentiation operator in the A -> B ~ B^A correspondence as a Pi: B^A ~ ∏ A (λ_ -> B). This says that we're defining an A-ary product of Bs, such that the Bs cannot depend on the particular A we've chosen. Sure, it's equivalent to plain exponentiation, but it lets us move up to cases in which there is a dependence.
In the most general case, we get dependent types, like what you see in Agda or Coq: in Agda syntax, replicate : Bool -> ((n : Nat) -> Vec Bool n) is one possible application of a Pi type, which could be expressed more explicitly as replicate : Bool -> ∏ Nat (Vec Bool), or further as replicate : ∏ Bool (λ_ -> ∏ Nat (Vec Bool)).
Note that as you might expect from the underlying algebra, you can fuse both of the ∏s in the definition of replicate above into a single ∏ ranging over the cartesian product of the domains: ∏ Bool (\_ -> ∏ Nat (Vec Bool)) is equivalent to ∏ (Bool, Nat) (λ(_, n) -> Vec Bool n) just like it would be at the "value level". This is simply uncurrying from the perspective of type theory.
I do realize your question was about polymorphism, so I'll stop going on about dependent types, but they are relevant: forall in Haskell is roughly equivalent to a ∏ with a domain over the type (kind) of types, *. Indeed, the function-like behavior of polymorphism can be observed directly in GHC Core, which writes type abstractions as capital lambdas (Λ). As such, a polymorphic type like forall a. a -> a is actually just ∏ * (Λ a -> (a -> a)) (using the Λ notation now that we distinguish between types and values), which can be expanded out to the infinite product (Bool -> Bool, Int -> Int, () -> (), (Int -> Bool) -> (Int -> Bool), ...) for every possible type. Instantiation of the type variable is simply projecting out the suitable element from the *-ary product (or applying the type function).
Now, for the big piece I missed in my original version of this answer: parametricity. Parametricity can be described in several different ways, but none of the ones I know of (viewing types as relations, or (di)naturality in category theory) really has a very algebraic interpretation. For our purposes, though, it boils down to something fairly simple: you can't pattern-match on *. I know that GHC lets you do that at the type level with type families, but you can only cover a finite chunk of * when doing that, so there are necessarily always points at which your type family is undefined.
What this means, from the point of view of polymorphism, is that any type function F we write in ∏ * F must either be constant (i.e., completely ignore the type it was polymorphic over) or pass the type through unchanged. Thus, ∏ * (Λ _ -> B) is valid because it ignores its argument, and corresponds to forall a. B. The other case is something like ∏ * (Λ x -> Maybe x), which corresponds to forall a. Maybe a, which doesn't ignore the type argument, but only "passes it through". As such, a ∏ A that has an irrelevant domain A (such as when A = *) can be seen as more of an A-ary indexed intersection (picking the common elements across all instantiations of the index), rather than a product.
Crucially, at the value level, the rules of parametricity prevent any funny behavior that might suggest the types are larger than they really are. Because we don't have typecase, we can't construct a value of type forall a. B that does something different based on what a was instantiated to. Thus, although the type is technically a function * -> B, it is always a constant function, and is thus equivalent to a single value of B. Using the ∏ interpretation, it is indeed equivalent to an infinite *-ary product of Bs, but those B values must always be identical, so the infinite product is effectively as big as a single B.
Similarly, although ∏ * (Λ x -> (x -> x)) (a.k.a., forall a. a -> a) is technically equivalent to an infinite product of functions, none of those functions can inspect the type, so all are constrained to only return their input value and not do any funny business like (+1) : Int -> Int when instantiated to Int. Because there is only one (assuming a total language) function that can't inspect the type of its argument but must return a value of that same type, the infinite product is thus just as large as a single value.
Now, about your direct question on (forall r . (a -> r) -> r) ~ a. First, let's express your ~ operator more formally. It's really isomorphism, so we need two functions going back and forth, and an argument that they're inverses.
data Iso a b = Iso
  { to   :: a -> b
  , from :: b -> a
  -- proof1 :: forall x. to (from x) == x
  -- proof2 :: forall x. from (to x) == x
  }
and now we express your original question in more formal terms. Your question amounts to constructing a term of the following (impredicative, so GHC has trouble with it, but we'll survive) type:
forall a. Iso (forall r. (a -> r) -> r) a
Which, using my earlier terminology, amounts to ∏ * (Λ a -> Iso (∏ * (Λ r -> ((a -> r) -> r))) a). Once again we have an infinite product that can't inspect its type argument. By handwaving, we can argue that the only possible values considering the parametricity rules (the other two proofs are respected automatically) for to and from are ($ id) and flip id.
If this feels unsatisfying, it's probably because the algebraic interpretation of forall didn't really add anything to the proof. It's really just plain old type theory, but I hope I was able to provide something that feels a little less categorical than the Yoneda form of it. It's worth noting that we don't actually need to use parametricity to write proof1 and proof2 above, though. Parametricity only enters the picture when we want to state that ($ id) and flip id are our only options for to and from (which we can't prove in Agda or Coq, for that reason).
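As a sketch of that last point, wrapping the quantified side in a newtype (Cont below is my own name) sidesteps the impredicativity problem mentioned above, and then ($ id) and flip id really are the two halves of the Iso:

{-# LANGUAGE RankNTypes #-}

newtype Cont a = Cont { runCont :: forall r. (a -> r) -> r }

isoCont :: Iso (Cont a) a
isoCont = Iso
  { to   = \c -> runCont c id       -- the ($ id) direction
  , from = \x -> Cont (\k -> k x)   -- the flip id direction
  }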
To (attempt to) answer the actual question (which is less interesting than the answers to the broader issues raised), the question is ill formed because of a "type error"
Either ~ (+)
(,) ~ (*)
(->) ~ flip (^)
() ~ 1
Void ~ 0
These all map types to naturals, and type constructors to functions on naturals. In a sense, you have a functor from the category of types to the category of naturals. In the other direction, you "forget" stuff, since the types preserve algebraic structure while the naturals throw it away. I.e. given Either () () you can get a unique natural, but given that natural, you can get many types.
But this is different:
(forall r . (a -> r) -> r) ~ a
It maps a type to another type! It is not part of the above functor. It's just an isomorphism within the category of types. So let's give that a different symbol, <=>
Now we have
(forall r . (a -> r) -> r) <=> a
Now you note that we can not only send types to nats and arrows to arrows, but also some isomorphisms to other isomorphisms:
(a, (b, c)) <=> ((a, b), c) ~ a * (b * c) = (a * b) * c
But something subtle is going on here. In a sense, the latter isomorphism on pairs is true because the algebraic identity is true. This is to say that the "isomorphism" in the latter simply means that the two types are equivalent under the image of our functor to the nats.
The former isomorphism we need to prove directly, which is where we start to get to the underlying question -- given our functor to the nats, what does forall r. map to? But the answer is that forall r. is neither a type, nor a meaningful arrow between types.
By introducing forall, we have moved away from first order types. There's no reason to expect that forall should fit in our above Functor, and indeed, it doesn't.
So we can explore, as others have above, why the isomorphism holds (which is itself very interesting) -- but in doing so we've abandoned the algebraic core of the question. A question which can be answered, I think, is: given the category of higher-order types and constructors as arrows between them, what is there a meaningful Functor to?
Edit:
So now I have another approach which shows why adding polymorphism makes things go nuts. We start by asking a simpler question -- does a given polymorphic type have zero or more than zero inhabitants? This is the type inhabitation problem, and via Curry-Howard it winds up being a problem in modified realizability, since it's the same thing as asking whether a formula in some logic is realizable in an appropriate computational model. Now, as that page explains, this is decidable in the simply typed lambda calculus, though PSPACE-complete. But once we move to anything more complicated, for example by adding polymorphism and going to System F, it becomes undecidable!
So, if we can't decide if an arbitrary type is inhabited at all, then we clearly can't decide how many inhabitants it has!
It's an interesting question. I don't have a full answer, but this was too long for a comment.
The type signature (forall r. (a -> r) -> r) can be expressed as me saying
For any type r that you care to name, if you give me a function that takes a and produces an r, then I will give you back an r.
Now, this has to work for any type r, but it can be a specific type a. So the way for me to pull off this neat trick is to have an a sitting around somewhere, which I feed to the function (which produces an r for me), and then I hand that r back to you.
But if I have an a sitting around, I could give it to you:
If you give me a 1, I'll give you an a.
which corresponds to the type signature 1 -> a or simply a. By this informal argument we have
(forall r. (a -> r) -> r) ~ a
The next step would be to generate the corresponding algebraic expression, but I'm not clear on how the algebraic quantities interact with the universal quantification. We may need to wait for an expert!
A few links to the nLab:
Universal quantifier, corresponds to dependent product.
Existential quantifier, corresponds to dependent sum (dependent coproduct).
Thus, in settings of category theory:
Type               | Modeled¹ as               | In category
-------------------+---------------------------+-------------
Unit               | Terminal object           | CCC
Bottom             | Initial object            |
Record             | Product                   |
Union              | Sum (coproduct)           |
Function           | Exponential               |
-------------------+---------------------------+-------------
Dependent product² | Right adjoint to pullback | LCCC
Dependent sum      | Left adjoint to pullback  |
¹) in an appropriate category ─ CCC for the total and non-polymorphic subset of Haskell, CPO for the non-total traits of Haskell, LCCC for dependently typed languages.
²) forall quantification is a special case of dependent product:
∀(x :: *). y[x] ~ ∏(x : Set)y[x]
where Set is the universe of all small types.
