While learning about the Yoneda lemma, I came across the following encoding of the underlying natural isomorphism in Haskell:
forward :: Functor f => (forall r . (a -> r) -> f r) -> f a
forward f = f id
backward :: Functor f => f a -> (forall r. (a -> r) -> f r)
backward x f = fmap f x
I tried to simplify the implementation of backward to flip fmap but failed as the latter has type f a -> (a -> r) -> f r.
From here on I'm stuck pinpointing precisely what the differences between the two implementations are, all the more so because applying either function to a concrete functor yields the same result type:
ghci> :t backward (Just "")
backward (Just "") :: (String -> r) -> Maybe r
ghci> :t (flip fmap) (Just "")
(flip fmap) (Just "") :: (String -> r) -> Maybe r
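For reference, here is a minimal, self-contained version of both functions side by side (the name `backward'` for the `flip fmap` variant is mine); at any concrete use site they behave identically:

```haskell
{-# LANGUAGE RankNTypes #-}

backward :: Functor f => f a -> (forall r. (a -> r) -> f r)
backward x f = fmap f x

-- the flip fmap version: same body, but the quantifier sits
-- (implicitly) at the top level of the signature
backward' :: Functor f => f a -> (a -> r) -> f r
backward' = flip fmap

main :: IO ()
main = do
  print (backward (Just (1 :: Int)) show)   -- Just "1"
  print (backward' (Just (1 :: Int)) show)  -- Just "1"
</imports>
```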
Questions:
What exactly is the difference between the two?
Is there something I can do with one that can't be done with the other?
Why is the universal quantification needed at all for backward?
In Haskell we write lambdas for the values we pass:
id :: forall a . a->a
id = \x -> x
but when we compile to the lower-level Core language, which is close to System F, we also see type lambdas. The code becomes more like
id = \ #a (x :: #a) -> x
which means: the caller should choose and pass a type #a, then a value x :: #a of that type, and finally receive that value back.
Now, your issue arises from the two (isomorphic) types
forall a . T -> U a
-- vs
T -> forall a . U a
Here, T does not depend on a. The two types are indeed isomorphic, and at the lower level it's just a matter of "flipping" the term-lambda with the type-lambda:
\ #a (x :: T) -> ... -- has the former type
-- vs
\ (x :: T) #a -> ... -- has the latter
Since in Haskell we do not usually write or see the type-lambdas, it is hard to distinguish between these. Further, when the compiler performs type inference it will (as far as I can understand) infer the first one, where the type-lambdas are all at the beginning.
What exactly is the difference between the two?
Not much. They have isomorphic types, after all. One has the forall a at the topmost level, while the other has it in another position.
Once you apply them to arguments as you did, GHC will automatically choose the right place to add the inferred #a argument, so you won't easily notice the difference. The two calls are of these forms:
f #r x
-- vs
f x #r
On top of that, GHC can re-generalize the type, so we obtain
(\#r -> f #r x) :: forall r . ....
(\#r -> f x #r) :: forall r . ....
Is there something I can do with one that can't be done with the other?
Not really.
Why is the universal quantification needed at all for backward?
Usually, when defining an isomorphism between A and B we want
forw :: A -> B
back :: B -> A
If either type has a forall (say, A = forall r . V r) then we get a forall in that position. Note that the type for back in this case is also isomorphic to
forall r . B -> V r
which is what we would get if we omitted the quantifier (and let Haskell re-add it implicitly at the top-level).
(By contrast, the quantifier in the type of forw :: (forall r . V r) -> B can not be moved to the top level.)
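To illustrate that last point, here is a small sketch (the name `forwardBad` and its commented-out signature are hypothetical, for contrast): `forward` needs its argument to be polymorphic in r, so the quantifier cannot float to the top level without changing the meaning.

```haskell
{-# LANGUAGE RankNTypes #-}

forward :: Functor f => (forall r. (a -> r) -> f r) -> f a
forward f = f id

-- A hypothetical top-level-forall variant would mean something different:
-- the *caller* would pick r, so `f id` could no longer instantiate r := a.
-- forwardBad :: Functor f => ((a -> r) -> f r) -> f a
-- forwardBad f = f id   -- rejected by the type checker

main :: IO ()
main = print (forward (\k -> fmap k (Just (5 :: Int))))  -- Just 5
```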
Related
I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear-cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is: because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any Functor f => (a -> b) -> f a -> f b will in practice also obey the functor laws, i.e. it will be fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated with fmap's polymorphic type (see the comment under Lemma 1).
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
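As a quick sanity check of that claim, here is a sketch (the type `Pair` and the hand-written `fmapPair` are my own illustration): any law-abiding map for a given parameterised type must coincide with the derived fmap.

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Pair a = Pair a a deriving (Show, Eq, Functor)

-- a law-abiding hand-written map necessarily coincides with the derived fmap
fmapPair :: (a -> b) -> Pair a -> Pair b
fmapPair f (Pair x y) = Pair (f x) (f y)

main :: IO ()
main = print (fmap (+1) (Pair 1 2) == fmapPair (+1) (Pair (1 :: Int) 2))  -- True
```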
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap:
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers no bigger than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
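The schema above can be made concrete for finite lists (the two index functions below are my own choices): the identity index function recovers the real fmap, while reversed indices give a function of the same type with different, law-breaking behaviour.

```haskell
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]

main :: IO ()
main = do
  -- identity index function: behaves exactly like fmap
  print (maybeIt'sFmapLol (\n -> [0 .. n - 1]) (+1) [1, 2, 3 :: Int])      -- [2,3,4]
  -- reversed indices: same type, different behaviour
  print (maybeIt'sFmapLol (\n -> [n - 1, n - 2 .. 0]) (+1) [1, 2, 3 :: Int])  -- [4,3,2]
```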
In Haskell, there are many examples of higher kinded polymorphism when dealing with ad hoc polymorphism, such as Monad and Functor. However, I cannot think of any examples of this for parametric polymorphism.
Is this possible, and if so, can I have an example of one which is useful?
If you still allow typeclass constraints, then the answer is, sure! E.g. I'd still call something like
normalise :: (Foldable f, Functor f, Fractional n) => f n -> f n
normalise v = fmap (/sum v) v
parametric polymorphism. But I suppose that's not what you have in mind.
Another thing that's obviously possible is to constrain types to have a particular applied form _ _, like
hmap :: (f a -> f b) -> [f a] -> [f b]
hmap = map
This isn't exactly remarkable, but it could possibly be useful in some applications as a subtle hint to the type checker. In fact, this is one way you can solve the phantom argument problem: instead of
class LengthyList l where minimumLength :: l a -> Int
instance LengthyList [] where minimumLength _ = 0
instance LengthyList NonEmpty where minimumLength _ = 1
you might make the signature
minimumLength :: p (l a) -> Int
aka
minimumLength :: proxy (l a) -> Int
Thereby you still pass in the type-information of l, but guarantee that the implementation cannot try to evaluate the argument at runtime.
The standard way to do this is however
minimumLength :: Tagged (l a) Int
or
minimumLength :: Proxy (l a) -> Int
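Putting the Proxy version together (this is my own assembly of the pieces above, using Data.Proxy from base):

```haskell
import Data.List.NonEmpty (NonEmpty)
import Data.Proxy (Proxy (..))

-- the class only needs the *type* l, never a value of it
class LengthyList l where
  minimumLength :: Proxy (l a) -> Int

instance LengthyList []       where minimumLength _ = 0
instance LengthyList NonEmpty where minimumLength _ = 1

main :: IO ()
main = print ( minimumLength (Proxy :: Proxy [Int])
             , minimumLength (Proxy :: Proxy (NonEmpty Char)) )  -- (0,1)
```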
Generally though, there's nothing you could do with f a that couldn't also be done with a single type variable fa, so essentially you could rewrite the type of any such higher-order parametric function to a first-order parametrically polymorphic one.
You totally can do this. A type synonym pigworker is fond of:
type f ~> g = forall a . f a -> g a
This, for reasons I don't actually know, represents a natural transformation (whatever exactly that is) between functors f and g. But a function can take an argument of type f ~> g and apply it to as many types f a as it likes. Using non-regular types (specifically, higher-order nested data types, as Ralf Hinze, for example, explored in Numerical Representations as Higher-Order Nested Datatypes), it could be applied to an unbounded number of different types.
A contrived example:
{-# LANGUAGE RankNTypes, TypeOperators #-}
type f ~> g = forall a . f a -> g a
data Two g a = Two (g a) (g a)
data Foo f a = This (f a)
             | That (Foo (Two f) a)
hello :: (f ~> g) -> Foo f a -> Foo g a
hello t (This fa) = This (t fa)
hello t (That buh) =
  That (hello (\(Two x y) -> Two (t x) (t y)) buh)
hello is polymorphic in the types f and g, each of which has kind * -> *. † I believe that converting this to use only types of kind * could require non-trivial refactoring.
†In fact, if you enable PolyKinds, f and g will each have the poly-kinded kind k -> *.
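A runnable version of the contrived example (the consumer `countFoo` and the value `example` are my additions, to show the natural transformation actually doing something): the same polymorphic-recursion trick lets us both transform and consume a Foo.

```haskell
{-# LANGUAGE RankNTypes, TypeOperators #-}

type f ~> g = forall a . f a -> g a

data Two g a = Two (g a) (g a)
data Foo f a = This (f a)
             | That (Foo (Two f) a)

hello :: (f ~> g) -> Foo f a -> Foo g a
hello t (This fa)  = This (t fa)
hello t (That buh) = That (hello (\(Two x y) -> Two (t x) (t y)) buh)

-- count the elements reachable through the nested Two layers
countFoo :: (forall x. f x -> Int) -> Foo f a -> Int
countFoo c (This fa) = c fa
countFoo c (That b)  = countFoo (\(Two x y) -> c x + c y) b

len :: [x] -> Int
len = length

example :: Foo [] Int
example = That (This (Two [1] [2, 3]))

main :: IO ()
main = print (countFoo len example, countFoo len (hello (take 1) example))  -- (3,2)
```

Here `take 1 :: [] ~> []` is itself a natural transformation, applied at two different element types inside `example`.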
One example I'm quite fond of is the foldr operator for lists indexed by their length: it is parametrically polymorphic over a predicate p of kind Nat -> * and guarantees that if you apply it to a list of length m then you get back a proof of p m.
This corresponds to this type:
foldr :: forall a (p :: Nat -> *).
         (forall n. a -> p n -> p ('Succ n)) ->
         p 'Zero ->
         forall n. Vec a n -> p n
This extra precision makes it possible to implement e.g. append using foldr rather than having to proceed by pattern-matching.
append :: forall a m n. Vec a m -> Vec a n -> Vec a (m :+: n)
I've uploaded a complete gist with all the right language extensions turned on and the code corresponding to these types in case you want to peek.
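Since the gist link isn't reproduced here, the following is my own sketch of how that append-via-foldr works (the motive newtype `Appended` and all names are my choices; the quantifiers are hoisted to the top level for simplicity):

```haskell
{-# LANGUAGE DataKinds, GADTs, RankNTypes, ScopedTypeVariables,
             TypeApplications, TypeFamilies, TypeOperators #-}

import Data.Kind (Type)

data Nat = Zero | Succ Nat

data Vec (a :: Type) (n :: Nat) where
  Nil  :: Vec a 'Zero
  Cons :: a -> Vec a n -> Vec a ('Succ n)

type family (m :: Nat) :+: (n :: Nat) :: Nat where
  'Zero   :+: n = n
  'Succ m :+: n = 'Succ (m :+: n)

-- length-indexed foldr; the motive p tracks the length of the input
vfoldr :: forall a (p :: Nat -> Type) (n :: Nat).
          (forall k. a -> p k -> p ('Succ k)) -> p 'Zero -> Vec a n -> p n
vfoldr _ z Nil         = z
vfoldr c z (Cons x xs) = c x (vfoldr c z xs)

-- the motive for append: "a Vec a (k :+: n) built from the k elements so far"
newtype Appended a n k = Appended { unAppend :: Vec a (k :+: n) }

append :: forall a m n. Vec a m -> Vec a n -> Vec a (m :+: n)
append xs ys = unAppend (vfoldr @a @(Appended a n) step (Appended ys) xs)
  where
    step :: a -> Appended a n k -> Appended a n ('Succ k)
    step x (Appended v) = Appended (Cons x v)

toList :: Vec a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (append (Cons 1 (Cons 2 Nil)) (Cons (3 :: Int) Nil)))  -- [1,2,3]
```

No pattern matching happens in `append` itself; the motive does all the index bookkeeping.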
I'm trying to write a function (called hide here), which can apply a sufficiently polymorphic function inside an existential wrapper (or lift functions to work on wrappers with hidden types; hence "hide"):
{-# LANGUAGE GADTs, RankNTypes #-}
data Some f
where Some :: f a -> Some f
hide :: (forall a. f a -> g b) -> Some f -> Some g
hide f (Some x) = Some (f x)
data Phantom a = Phantom
cast :: Phantom a -> Phantom b
cast Phantom = Phantom
works :: Some Phantom -> Some Phantom
works = hide cast
doesn't :: Functor f => Some f -> Some f
doesn't = hide (fmap $ \x -> [x])
{-
foo.hs:23:17:
Couldn't match type ‘b0’ with ‘[a]’
because type variable ‘a’ would escape its scope
This (rigid, skolem) type variable is bound by
a type expected by the context: f a -> f b0
at foo.hs:23:11-33
Expected type: f a -> f b0
Actual type: f a -> f [a]
In the first argument of ‘hide’, namely ‘(fmap $ \ x -> [x])’
In the expression: hide (fmap $ \ x -> [x])
In an equation for ‘doesn't’: doesn't = hide (fmap $ \ x -> [x])
Failed, modules loaded: none.
-}
but :: Functor f => Some f -> Some f
but = hide' (fmap $ \x -> [x])
  where hide' :: (forall a. f a -> g [a]) -> Some f -> Some g
        hide' f (Some x) = Some (f x)
So I pretty much understand why this is happening; works shows that hide does in fact work when the return type is completely unrelated to the input type, but in doesn't I'm calling hide with an argument of type a -> [a]. hide is supposed to get to "choose" the type a (RankNTypes), but b is ordinarily polymorphic. When b in fact depends on a, a could leak out.
But in the context where I'm actually calling it, a doesn't in fact leak out; I immediately wrap it up in Some. And in fact I can write an alternate hide' that accepts specifically a -> [a] functions and works with the exact same implementation, just a different type signature.
Is there any way I can type the implementation hide f (Some x) = Some (f x) so that it works more generally? Really I'm interested in lifting functions with type a -> q a, where q is some arbitrary type function; i.e. I expect the return type to depend on a, but I don't care how it does so. There probably are use cases where q a is a constant (i.e. the return type doesn't depend on a), but I imagine they'll be much rarer.
This example is pretty silly, obviously. In my real use case I have a GADT Schema a that roughly speaking represents types in an external type system; the phantom parameter gives a Haskell type that could be used to represent values in the external type system. I need that phantom parameter to keep everything type safe, but sometimes I construct Schemas based on runtime data, in which case I don't know what that parameter type is.
So I appear to need another type which is agnostic about the type parameter. Rather than make (yet) another parallel type, I was hoping to use a simple existential wrapper like Some to construct it from Schema, and be able to lift functions of type forall a. Schema a -> Schema b to Some Schema -> Some Schema. So if I have an XY problem and I'd be better off using some other means of passing around Schema a for unknown a, that would also solve my problem.
As David Young says, you can write
hide' :: (forall a. f a -> g (q a)) -> Some f -> Some g
hide' f (Some x) = Some (f x)
does :: Functor f => Some f -> Some f
does = hide' (fmap (:[]))
but instead of making hide fmap-like, you can make it bind-like:
hide'' :: (forall a. f a -> Some g) -> Some f -> Some g
hide'' f (Some x) = f x
does :: Functor f => Some f -> Some f
does = hide'' (Some . fmap (:[]))
But this is a bit boilerplateable.
Or, more generally
elim :: (forall a. f a -> c) -> Some f -> c
elim f (Some x) = f x
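A quick usage sketch for elim (the helper `len` is my addition): the eliminator needs a genuinely polymorphic consumer, since the hidden type variable must not escape.

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}

data Some f where
  Some :: f a -> Some f

elim :: (forall a. f a -> c) -> Some f -> c
elim f (Some x) = f x

-- a consumer polymorphic in the element type
len :: [x] -> Int
len = length

main :: IO ()
main = print (elim len (Some "abc"), elim len (Some [1, 2, 3 :: Int]))  -- (3,3)
```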
I'm not sure how useful this is for your larger use-case as you'd have to refactor all your existing operations to use continuation passing style, but continuations can be used to implement a hide that works for both of your examples and keeps b completely generic.
hide :: (forall r a. f a -> (forall b. g b -> r g) -> r g) -> Some f -> Some g
hide f (Some x) = f x Some
cast :: Phantom a -> (forall b. Phantom b -> r Phantom) -> r Phantom
cast Phantom f = f Phantom
works :: Some Phantom -> Some Phantom
works = hide cast
alsoWorks :: Functor f => Some f -> Some f
alsoWorks = hide (\a f -> f $ fmap (\x -> [x]) a)
You can make it somewhat nicer by factoring out the CPS-conversion which allows you to more easily use existing functions like your original cast:
hide :: (forall r a. f a -> (forall b. g b -> r g) -> r g) -> Some f -> Some g
hide f (Some x) = f x Some
cps :: (f a -> g b) -> (f a -> (forall c. g c -> r) -> r)
cps f a c = c (f a)
cast :: Phantom a -> Phantom b
cast Phantom = Phantom
works :: Some Phantom -> Some Phantom
works = hide $ cps cast
alsoWorks :: Functor f => Some f -> Some f
alsoWorks = hide $ cps $ fmap (\x -> [x])
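Putting the CPS pieces together in one compilable file (the observer `describe` is my addition, to show the lifted function really rebuilt the wrapper):

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}

data Some f where
  Some :: f a -> Some f

hide :: (forall r a. f a -> (forall b. g b -> r g) -> r g) -> Some f -> Some g
hide f (Some x) = f x Some

cps :: (f a -> g b) -> (f a -> (forall c. g c -> r) -> r)
cps f a c = c (f a)

alsoWorks :: Functor f => Some f -> Some f
alsoWorks = hide (cps (fmap (\x -> [x])))

-- peek inside the existential at a concrete functor
describe :: Some [] -> Int
describe (Some xs) = length xs

main :: IO ()
main = print (describe (alsoWorks (Some [1, 2, 3 :: Int])))  -- 3
```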
The following question relates to Recursive algebraic data types via polymorphism in Haskell.
Recursive algebraic data types can be realized in any language with the capabilities of System F using universal parametric polymorphism. For example, the type of natural numbers can be introduced (in Haskell) as
newtype Nat = Nat { runNat :: forall t. (t -> (t -> t) -> t) }
with the 'usual' natural number n being realized as
\ x0 f -> f(f(...(f x0)...))
with n iterations of f used.
Similarly, the type of Booleans can be introduced as
newtype Bool = Bool { runBool :: forall t. t -> t -> t }
with the expected values 'true' and 'false' being realized as
true = \ t f -> t
false = \ t f -> f
Q: Are all terms of type Bool or Nat or any other potentially recursive algebraic data type (encoded in this way) of the above form, up to some reduction rules of operational semantics?
Example 1 (Natural numbers): Is any term of type forall t. t -> (t -> t) -> t 'equivalent' in some sense to a term of the form \ x0 f -> f (f ( ... (f x0) ... ))?
Example 2 (Booleans): Is any term of type forall t. t -> t -> t 'equivalent' in some sense to either \ t f -> t or \ t f -> f?
Addendum (internal version): In case the language under consideration is even capable of expressing propositional equality, this meta-mathematical question could be internalized as follows, and I would be very happy if someone would come up with a solution for it:
For any functor m we can define the universal module and some decoding-encoding function on it as follows:
type ModStr m t = m t -> t
newtype UnivMod m = UnivMod { univProp :: forall t. (ModStr m t) -> t }
classifyingMap :: forall m. forall t. (ModStr m t) -> (UnivMod m -> t)
classifyingMap f = \ x -> (univProp x) f
univModStr :: (Functor m) => ModStr m (UnivMod m)
univModStr = \ f -> UnivMod $ \ g -> g (fmap (classifyingMap g) f)
dec_enc :: (Functor m) => UnivMod m -> UnivMod m
dec_enc x = (univProp x) univModStr
Q: In case the language is capable of expressing this: is the equality type dec_enc = id inhabited?
In System F (AKA λ2), all inhabitants of ∀α.α→α→α are indeed λ-equal to K or K*.
First, if M : ∀α.α→α→α then it has a normal form N (since System F is normalizing), and by the subject reduction theorem (see Barendregt: Lambda Calculi with Types) also N : ∀α.α→α→α.
Let's examine how these normal forms can look like. (We'll be using Generation lemma for λ2, see the Barendregt's book for formal details.)
If N is a normal form, then N (and every subexpression of it) must be in head normal form, that is, an expression of the form λx1 ... xn. y P1 ... Pk, where n and/or k can also be 0.
For the case of N, there must be at least one λ, because initially we don't have any variable bound in the typing context that would take the place of y. So N = λx.U and x:α |- U:α→α.
Now again there must be at least one λ in the case of U, because if U were just y P1 ... Pk then y would have a function type (even for k=0 we'd need y:α→α), but we have just x:α in the context. So N = λxy.V and x:α, y:α |- V:α.
But V can't be λ.., because then it'd have function type τ→σ. So V must be just of the form z P1 ... Pk, but since we don't have any variable of function type in the context, k must be 0 and therefore V can be only x or y.
So there are only two terms in normal form of type ∀α.α→α→α: λxy.x and λxy.y and all other terms of this type are β-equal to one of these.
Using similar reasoning we can prove that all inhabitants of ∀α.α→(α→α)→α are β-equal to a Church numeral. (And I think that for type ∀α.(α→α)→α→α the situation is slightly worse; we also need η-equality, as λf.f and λfx.fx correspond to 1, but aren't β-equal, just βη-equal.)
If we disregard bottoms and unsafe stuff, then the only thing you can do universally with functions a -> a is compose them. However, that doesn't quite stop us at finite f (f ( ... (f x0) ... )) expressions: we also have the infinite composition infty x f = f $ infty x f.
Similarly, the only non-recursive boolean values are indeed \t _ -> t and \_ f -> f, but you can also tie knots here, like
blarg t f = blarg (blarg t f) (blarg f t)
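The "infinite composition" inhabitant is perfectly legal Haskell thanks to laziness; this small sketch (my own demonstration) shows it being productive when the motive type is itself lazy, even though forcing it fully would diverge:

```haskell
-- the extra, non-finite inhabitant of the Church-numeral type
infty :: t -> (t -> t) -> t
infty x f = f (infty x f)

main :: IO ()
main = print (take 3 (infty [] ((0 :: Int) :)))  -- [0,0,0]
```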
I've just started playing a little with Haskell... I want to write a function of the same type of the identity. Obviously, not equivalent to it. That would be something like,
myfunction :: a -> a
I cannot come up with an example in which the parameter and the return type are the same and can be virtually anything (this excludes the possibility of using Haskell's Typeclasses).
This is impossible without using undefined, as another commenter mentioned. Let's prove it by contradiction. Assume there were such a function:
f :: a -> a
When you say that's it not the same as id, that implies that you cannot define:
f x = x
However, consider the case where a is the type ():
f () = ...
The only possible result f could return would be (), but that would be the same implementation as id, therefore a contradiction.
The more sophisticated and rigorous answer is to show that the type forall a . a -> a must be isomorphic to (). When we say two types a and b are isomorphic, that means that we can define two functions:
fw :: a -> b
bw :: b -> a
... such that:
fw . bw = id
bw . fw = id
We can easily do this when the first type is a -> a and the second type is ():
fw :: (forall a . a -> a) -> ()
fw f = f ()
bw :: () -> (forall a . a -> a)
bw () x = x
We can then prove that:
fw . bw
= \() -> fw (bw ())
= \() -> fw (\x -> x)
= \() -> (\x -> x) ()
= \() -> ()
= id
bw . fw
= \f -> bw (fw f)
-- For this to type-check, the type of (fw f) must be ()
-- Therefore, f must be `id`
= \f -> id
= \f -> f
= id
When you prove two types are isomorphic, one thing you know is that if one type is inhabited by a finite number of elements, so is the other. Since the type () is inhabited by exactly one value:
data () = ()
That means that the type (forall a . a -> a) must also be inhabited by exactly one value, which just so happens to be the implementation for id.
Edit: Some people have commented that the proof of the isomorphism is not sufficiently rigorous, so I'll invoke the Yoneda lemma, which when translated into Haskell, says that for any functor f:
(forall b . (a -> b) -> f b) ~ f a
Where ~ means that (forall b . (a -> b) -> f b) is isomorphic to f a. If you choose the Identity functor, this simplifies to:
(forall b . (a -> b) -> b) ~ a
... and if you choose a = (), this further simplifies to:
(forall b . (() -> b) -> b) ~ ()
You can easily prove that () -> b is isomorphic to b:
fw :: (() -> b) -> b
fw f = f ()
bw :: b -> (() -> b)
bw b = \() -> b
fw . bw
= \b -> fw (bw b)
= \b -> fw (\() -> b)
= \b -> (\() -> b) ()
= \b -> b
= id
bw . fw
= \f -> bw (fw f)
= \f -> bw (f ())
= \f -> \() -> f ()
= \f -> f
= id
So we can then use that to finally specialize the Yoneda isomorphism to:
(forall b . b -> b) ~ ()
Which says that the type forall b . b -> b of functions is isomorphic to (). The Yoneda lemma provides the rigor that my proof was missing.
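The intermediate isomorphism is easy to spot-check concretely (this is just the answer's fw/bw pair exercised on sample values):

```haskell
-- witnesses of the () -> b ≅ b isomorphism
fw :: (() -> b) -> b
fw f = f ()

bw :: b -> (() -> b)
bw b = \() -> b

main :: IO ()
main = print (fw (bw (42 :: Int)), fw (bw "hi"))  -- (42,"hi")
```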
Let me formulate an answer that elaborates on dbaupp's comment. Any function of type a -> a would also give rise to a function of type () -> (), so I will look at this subproblem first.
A usual semantics of Haskell types and functions represents a type as a pointed chain-complete partial order, and functions as continuous functions. The type () is represented by the two-element set {⊥,()} with the order ⊥⊏(). In plain set theory there are 2^2=4 functions from this set to itself, but only three of them are continuous:
f1: ⊥ ↦ ⊥, () ↦ ⊥,
f2: ⊥ ↦ ⊥, () ↦ (), and
f3: ⊥ ↦ (), () ↦ ().
So in our semantic model, there are three different functions of type () -> (). But which of them can be implemented in Haskell? All of them!
f1 _ = undefined (or f1 x = f1 x)
f2 x = x (or f2 = id)
f3 _ = () (or f3 = const ())
Looking at these definitions, you can see that f1 and f2 can also be used to define a function of type a -> a. As they do different things already on (), they are different. So we have at least two different functions of type a -> a.
In the above semantic model, there are many more functions of type a -> a, but these would not be expressible in Haskell (this is related to parametricity and Wadler’s Theorems for Free). A proper proof that f1 and f2 are the only such functions does not seem to be very easy, as it depends on what the Haskell language disallows (e.g. no pattern matching on the type of the argument).
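The three () -> () functions from the semantic model can be written out and distinguished in code (my own demonstration; f3's laziness is exactly what makes it map ⊥ to ()):

```haskell
f1, f2, f3 :: () -> ()
f1 _ = undefined   -- ⊥ ↦ ⊥, () ↦ ⊥
f2 x = x           -- ⊥ ↦ ⊥, () ↦ ()
f3 _ = ()          -- ⊥ ↦ (), () ↦ ()

main :: IO ()
main = do
  print (f3 undefined)  -- f3 never forces its argument, so this prints ()
  print (f2 ())
```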
Unless you are willing to use undefined or bottom (a non-terminating expression), there literally are no other functions that satisfy that type.
This is one of the great strengths of the Haskell type system. It's possible to strongly restrict the set of functions the compiler will accept to ones that are obviously correct. For an extreme example, see djinn: it takes a type and generates possible functions matching that type. Even for real, complex examples, the list is often very short.
The key here is to understand that we know nothing about a; in particular, we have no way to generate a new value of that type or to transform it into something different. Hence we have no choice but to return it (or the bottom value). As soon as we have more information about a (e.g. a class constraint), we can do more interesting things with it:
f :: Monoid a => a -> a
f _ = mempty
or
f :: Monoid a => a -> a
f x = x `mappend` x `mappend` x
If you have a choice, as in f :: (a, a) -> a, you have two possible implementations (again ignoring bottom values), but for f :: (a, b) -> a you are back to one implementation, the same as fst. While it is valid to call f with a pair of identical types, e.g. f ("x", "y"), you can be sure that f behaves like fst, because inside the implementation of f there is no way to test whether the two argument types happen to coincide. Similarly, there is only one non-bottom version of f :: (a -> b) -> a -> b.
Polymorphism limits the degrees of freedom, because you don't know anything about your arguments, and in some cases it boils down to one non-bottom version.
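The Monoid examples above can be compared directly (named f1/f2 here so both can coexist in one file):

```haskell
f1, f2 :: Monoid a => a -> a
f1 _ = mempty
f2 x = x `mappend` x `mappend` x

main :: IO ()
main = print (f1 "abc", f2 "abc")  -- ("","abcabcabc")
```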
As others mentioned, no such other total function can exist. (If we don't limit ourselves to total functions then we can inhabit any type by undefined.)
I'll try to give a theoretical explanation based on the λ-calculus:
For simplicity, let's limit ourselves to λ-terms (to which we can translate any Haskell expression). For a λ-term M, let's call A its head if M ≡ A N1 ... Nk and A isn't an application (k can also be zero). Note that if M is in normal form, then A cannot be a λ-abstraction unless k = 0.
So let M :: a -> a be a λ-term in normal form. Since we have no variables in the context, M cannot be a variable and it cannot be an application. If it were, its head would have to be a variable. So M must be a λ-abstraction, it must be M ≡ λ(x:a).N.
Now N must be of type a, formally {x:a}⊢N:a. If N were a λ-abstraction, its type would be σ -> τ, which is not possible. If N were a function application, its head would have to be a variable, and the only one we have in the context is x. But since x:a, we cannot apply x to anything; x P isn't typeable for any P. So the only possibility is that N ≡ x. So M must be λ(x:a).x.