Comparing equal types in a definition - Lean

I'm fairly new to Lean, so apologies if this is obvious. I'm trying to learn Lean and category theory by doing some category theory exercises in Lean. I have these definitions for arrows and categories:
variable {α : Type u}
inductive Arrow : α → α → Type u
  | Id (x : α) : Arrow x x
  | Comp (g : Arrow b c) (f : Arrow a b) : Arrow a c
notation a " -→ " b => Arrow a b
notation a " ∘ " b => Arrow.Comp a b
structure Category :=
  (assoc {a b c d : α} : ∀ (f : a -→ b) (g : b -→ c) (k : c -→ d),
    (k ∘ (g ∘ f)) = ((k ∘ g) ∘ f))
  (unitl {a b : α} : ∀ (f : a -→ b), ((Arrow.Id b) ∘ f) = f)
  (unitr {a b : α} : ∀ (f : a -→ b), (f ∘ (Arrow.Id a)) = f)
This all compiles fine, so I try to define a discrete category as follows:
def IsDiscrete :=
  ∀ (x y : α) (f : Arrow x y), x = y ∧ f = Arrow.Id x
The intent is to express "all arrows are identity arrows", but the compiler complains that f has type "Arrow x y", not "Arrow x x". Of course, the whole point is that if arrow f exists, then x = y, so the comparison between f and Id x is sensible. How do I express this in Lean?
Alternatively, is there a better way to express arrows and/or categories in Lean? If so, why is that way better?
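One way to express the second conjunct (a sketch using heterogeneous equality; other encodings are possible): since f : Arrow x y and Arrow.Id x : Arrow x x live in different types, ordinary = cannot compare them, but Lean's built-in HEq can relate terms of two different types:
def IsDiscrete : Prop :=
  ∀ (x y : α) (f : Arrow x y), x = y ∧ HEq f (Arrow.Id x)
Given h : x = y, you can rewrite along h to turn the HEq back into an ordinary equality. As for a better representation: mathlib's CategoryTheory.Category class is the usual reference point; it bundles hom-types, identities, and composition together with these laws, and its design largely avoids equations between objects.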

Related

How to prove this lemma about eta expansion?

(here is a gist of my work so far.)
Coq comes with a rule about eta reduction, allowing us to prove something like the following:
Variables X Y Z : Type.
Variable f : X -> Y -> Z.
Lemma eta1 : (fun x => f x) = f.
Proof.
auto.
Qed.
The eta rule is not merely an equality rewrite, so we can use it beneath binders as well:
Lemma eta2 : (fun x y => f x y) = f.
Proof.
auto.
Qed.
Of course, one would expect that you could generalize this to an f of arbitrary arity. Here is my naïve attempt:
Require Import Coq.Lists.List.
Import ListNotations.
Fixpoint toType(ts : list Type)(t : Type) : Type :=
match ts with
|[] => t
|u::vs => u -> (toType vs t)
end.
Compute toType [X;Y] Z.
(*
= X -> Y -> Z
: Type
*)
Fixpoint etaexpand(ts : list Type) : forall t : Type, toType ts t -> toType ts t :=
match ts as ts return forall t, toType ts t -> toType ts t with
|[] => fun t x => x
|u::vs => fun t f (x:u) => etaexpand vs t (f x)
end.
Compute etaexpand [X;Y] Z f.
(*
= fun (x : X) (x0 : Y) => f x x0
: toType [X; Y] Z
*)
Lemma etaexpand_id : forall ts t f, etaexpand ts t f = f.
Proof.
induction ts; intros.
auto.
simpl.
(*stuck here*)
I get stuck on the inductive step of the above lemma. Naturally, I want to rewrite using the inductive hypothesis, but I cannot since the relevant subterm occurs beneath a binder. Of course, I myself know that the inductive hypothesis should be usable beneath binders, since it's just a chain of eta rewrites. I'm wondering then if there's a clever way to state and convince Coq of this fact.
In case anyone's curious, here's the solution I came up with after some thought.
The key is to simultaneously prove the following "niceness" property for etaexpand ts t:
Definition nice {X Y} (F : Y -> Y) := (forall y, F y = y) -> forall f : X -> Y,
  (fun x => F (f x)) = f.
Proving nice (etaexpand ts t) simultaneously with etaexpand_id, by mutual induction on ts, supplies exactly the rewrite-under-the-binder step that the plain induction lacks.

Haskell dot (.) operator in de Morgan's law implementation

In this question, the author has written an implementation of de Morgan's laws in Haskell. I understand the implementations of notAandnotB and notAornotB, but I'm struggling to understand the implementation of notAorB, which is:
notAorB :: (Either a b -> c) -> (a -> c, b -> c)
notAorB f = (f . Left, f . Right)
Could someone explain how the (f . Left, f . Right) part works? I've seen the . operator used before, but with three arguments, not two.
Thank you in advance.
Recall that the definition of the . operator is (f . g) x = f (g x), i.e. f . g = \x -> f (g x) (syntactically, it is a binary operator, it is just that Haskell’s syntax sugar permits the latter definition to be restated as the former). So, the definition you have can be rephrased as
notAorB f = ((\x -> f (Left x)), (\y -> f (Right y)))
(this can be done mechanically by Lambdabot on #haskell, tell him to #unpl ‹expr›), or more verbosely
notAorB f = (lt, rt)
  where lt x = f (Left x)
        rt y = f (Right y)
As always, try writing down the types. If (.) :: ∀ s t u. (t -> u) -> (s -> t) -> s -> u, f :: Either a b -> c, Left :: ∀ p q. p -> Either p q, then f . Left or (.) f Left :: a -> c, etc.
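To make this concrete, here is a tiny self-contained check (the test values are mine, not from the question):
import Data.Char (toUpper)

notAorB :: (Either a b -> c) -> (a -> c, b -> c)
notAorB f = (f . Left, f . Right)

main :: IO ()
main = do
  let (g, h) = notAorB (either show (map toUpper))
  print (g 42)      -- f (Left 42)       ==> "42"
  print (h "hello") -- f (Right "hello") ==> "HELLO"
Note that g only ever feeds f values tagged Left, and h only values tagged Right, which is the whole content of f . Left and f . Right.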

What would be the methods of a bi-comonad?

While musing what more useful standard class to suggest to this one
class Coordinate c where
  createCoordinate :: x -> y -> c x y
  getFirst :: c x y -> x
  getSecond :: c x y -> y
  addCoordinates :: (Num x, Num y) => c x y -> c x y -> c x y
it occurred to me that instead of something VectorSpace-y or R2, a rather more general beast might lurk here: a Type -> Type -> Type whose two contained types can both be extracted. Hm, perhaps they can be extracted?
Turns out neither the comonad nor bifunctors package contains something called Bicomonad. Question is, would such a class even make sense, category-theoretically? Unlike Bimonad (which also isn't defined, and I couldn't really see how might look), a naïve definition seems plausible:
class Bifunctor c => Bicomonad c where
  fst :: c x y -> x
  snd :: c x y -> y
  bidup :: c x y -> c (c x y) (c x y)
probably with the laws
fst . bidup ≡ id
snd . bidup ≡ id
bimap fst snd . bidup ≡ id
bimap bidup bidup . bidup ≡ bidup . bidup
but I find it disquieting that both fields of the result of bidup contain the same type, and there are quite a number of other, perhaps “better” conceivable signatures.
Any thoughts?
This is not an answer, but for Bimonad, how about this?
class Biapplicative p => Bimonad p where
  (>>==) :: p a b -> (a -> b -> p c d) -> p c d
biap :: Bimonad p => p (a -> b) (c -> d) -> p a c -> p b d
biap p q = p >>== \ab cd -> q >>== \a c -> bipure (ab a) (cd c)
instance Bimonad (,) where
  (a,b) >>== f = f a b
I don't know if this is categorically right/interesting, or even remotely useful, but it smells right from a Haskell perspective. Does it match your Bicomonad or something similar?
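For what it's worth, the proposed Bicomonad does admit at least one instance satisfying all four laws on the nose: plain pairs. A quick sketch (restating the question's class so the snippet stands alone):
import Prelude hiding (fst, snd)
import Data.Bifunctor (Bifunctor (..))

class Bifunctor c => Bicomonad c where
  fst :: c x y -> x
  snd :: c x y -> y
  bidup :: c x y -> c (c x y) (c x y)

-- Pairs extract from both sides, and duplicating just copies the pair.
instance Bicomonad (,) where
  fst (x, _) = x
  snd (_, y) = y
  bidup p = (p, p)
The laws hold by computation: fst (p, p) = p, snd (p, p) = p, bimap fst snd (p, p) = p, and both sides of the final law reduce to ((p, p), (p, p)).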

Is it possible to implement a function that returns an n-tuple on the lambda calculus?

An n-tuple on the lambda calculus is usually defined as:
1-tuple: λ a t . t a
1-tuple-fst: λ t . t (λ a . a)
2-tuple: λ a b t . t a b
2-tuple-fst: λ t . t (λ a b . a)
2-tuple-snd: λ t . t (λ a b . b)
3-tuple: λ a b c t . t a b c
3-tuple-fst: λ t . t (λ a b c . a)
3-tuple-snd: λ t . t (λ a b c . b)
3-tuple-trd: λ t . t (λ a b c . c)
... and so on.
My question is: is it possible to implement a function that receives a church number N and returns the corresponding N-tuple for any N? Also, is it possible to extend this function so it also returns the corresponding accessors? The algorithm can't use any form of recursion, including fixed-point combinators.
Edit: as requested, elaborating on what I've tried.
I want that function not to depend on recursion / fixed point combinators, so the obvious way to do it would be using Church numbers for repetition. That said, I have tried randomly testing many expressions, in order to learn how they grow. For example:
church_4 (λ a b c . a (b c))
Reduces to:
(λ a b c d e f . a ((((e d) c) b) a))
I've compared the reduction of many similar combinations church_4 (λ a b c . (a (b c))) to my desired results and noticed that I could implement the accessors as:
firstOf = (λ max n . (firstOf (sub max n) (firstOf n)))
access = (λ max idx t . (t (firstOf (sub max idx) (firstOf idx))))
Where sub is the subtraction operator and access church_5 church_2 means accessing the 3rd (because 2 is the 3rd natural) element of a 6-tuple.
Now, on the tuples. Notice that the problem is finding a term my_term such that, for example:
church_3 my_term
had the following normal form:
(λ a b c d t . ((((t a) b) c) d))
As you can see, I've almost found it, since:
church_3 (λ a b c . a (b c)) (λ a . a)
Reduces to:
(λ a b c d . (((a b) c) d))
Which is almost the result I need, except that the t is missing.
That is what I've tried so far.
Let's try to implement the n-ary tuple constructor. I shall also aim for a simple implementation, meaning that I try sticking to the elimination of natural numbers and tuples, and try to avoid using other (Church encoded) data structures.
My strategy is as follows:
Write a well-typed version of the function in a dependent language.
Translate it to untyped lambda calculus.
The reason for this is that I quickly get lost in untyped lambda calculus and I'm bound to make quite a few mistakes along the way, while the dependently typed environment puts me on rails. Also, proof assistants are just great help for writing any kind of code.
Step one
I use Agda. I cheat a bit with type-in-type. It makes Agda inconsistent, but for this problem proper type universes would be a huge pain, and it's very unlikely that we actually run into an inconsistency here anyway.
{-# OPTIONS --type-in-type #-}
open import Data.Nat
open import Data.Vec
We need a notion of n-ary polymorphic functions. We store the argument types in a vector of length n:
NFun : ∀ {n} → Vec Set n → Set → Set
NFun [] r = r
NFun (x ∷ ts) r = x → NFun ts r
-- for example, NFun (Nat ∷ Nat ∷ []) = λ r → Nat → Nat → r
We have the usual Church encoding for tuples. The constructors for n-ary tuples are n-ary functions returning a tuple.
NTup : ∀ {n} → Vec Set n → Set
NTup ts = ∀ {r} → NFun ts r → r
NTupCons : ℕ → Set
NTupCons n = ∀ ts → NFun {n} ts (NTup ts)
We'd like to have a function with type ∀ {n} → NTupCons n. We recurse on the Vec Set n parameter for the tuple constructor. The empty case is simple enough, but the cons case is a bit trickier:
nTupCons : ∀ {n} → NTupCons n
nTupCons [] x = x
nTupCons (t ∷ ts) x = ?
We need a NFun ts (NTup (t ∷ ts)) in place of the question mark. We know that nTupCons ts has type NFun ts (NTup ts), so we need to somehow get the former from the latter. We notice that what we need is just n-ary function composition, or in other words a functorial map over the return type of NFun:
compN : ∀ {n A B} (ts : Vec Set n) → (A → B) → NFun ts A → NFun ts B
compN [] f = f
compN (t ∷ ts) f g x = compN ts f (g x)
Now, we only need to get an NTup (t ∷ ts) from an NTup ts, and since we already have x with type t in scope, that's pretty easy:
nTupCons : ∀ {n} → NTupCons n
nTupCons [] x = x
nTupCons (t ∷ ts) x = compN ts consTup (nTupCons ts)
  where
    consTup : NTup ts → NTup (t ∷ ts)
    consTup tup con = tup (con x)
Step two
We shall get rid of the Vec Set n-s and rewrite the functions so they iterate on the n parameters. However, simple iteration is not good for nTupCons, since that only provides us the recursive result (nTupCons ts), but we also need the current n index for compN (since we implement compN by iterating on n). So we write a helper that's a bit like a paramorphism. We also need Church encoded pairs here to pass up the Nat-s through the iteration:
zero = λ z s. z
suc = λ n z s. s (n z s)
fst = λ p. p (λ a b. a)
snd = λ p. p (λ a b. b)
-- Simple iteration has type
-- ∀ {A} → A → (A → A) → Nat → A
-- In contrast, we may imagine rec-with-n having the following type
-- ∀ {A} → A → (A → Nat → A) → Nat → A
-- We also pass the Nat index of the hypothesis to the "cons" case
rec-with-n = λ z f n .
  fst (n
    (λ p. p z zero)
    (λ hyp p. p (f (fst hyp) (snd hyp)) (suc (snd hyp))))
-- Note: I use "hyp" for "hypothesis".
The rest is straightforward to translate:
compN = λ n. n (λ f. f) (λ hyp f g x. hyp f (g x))
nTupCon =
  rec-with-n
    (λ x. x)
    (λ hyp n. λ x. compN n (λ f g. f (g x)) hyp)
Let's test it for simple cases:
nTupCon zero =
(λ t. t)
nTupCon (suc zero) =
(λ hyp n. λ x. compN n (λ f g. f (g x)) hyp) (nTupCon zero) zero =
λ x. compN zero (λ f g. f (g x)) (λ t. t) =
λ x. (λ f g. f (g x)) (λ t. t) =
λ x. λ g. (λ t. t) (g x) =
λ x . λ g. g x =
λ x g . g x
nTupCon (suc (suc zero)) =
(λ hyp n. λ x. compN n (λ f g. f (g x)) hyp) (nTupCon (suc zero)) (suc zero) =
λ x. compN (suc zero) (λ f g. f (g x)) (λ a t. t a) =
λ x a. (λ f g. f (g x)) ((λ y t. t y) a) =
λ x a. (λ f g. f (g x)) (λ t. t a) =
λ x a g. (λ t. t a) (g x) =
λ x a g. g x a
It seems to work.
Let
foldargs = λ t n f z . (IsZero n) (t z) (λ a . foldargs t (pred n) f (f a z))
Then function
listofargs = λ n . foldargs id n pair null
returns reversed list of its args:
listofargs 5 a b c d e --> (e . (d . (c . (b . (a . null))))) or [e d c b a]
Function
apply = λ f l . (isnil l) f (apply (f (head l)) (tail l))
applies its first argument (an n-ary function) to the arguments taken from its second argument (a list of length n):
apply f [a b c d e] --> f a b c d e
The rest is easy:
n-tuple = λ n . foldargs n-tuple' (Succ n) pair null
  where
    n-tuple' = λ l . apply (head l) (reverse (tail l))
Implementations of the other functions can be taken from Wikipedia.
Recursion can be eliminated with the Y combinator.
reverse is simple.
UPD: Nonrecursive versions of the functions:
foldargs = Y (λ c t n f z . (IsZero n) (t z) (λ a . c t (pred n) f (f a z)))
apply = Y (λ c f l . (isnil l) f (c (f (head l)) (tail l)))
Y = λ f . (λ x . f (x x)) (λ x . f (x x))


I found it! There you go:
nTup = (λ n . (n (λ f t k . (f (λ e . (t (e k))))) (λ x . x) (λ x . x)))
Testing:
nTup n1 → (λ (λ (0 1)))
nTup n2 → (λ (λ (λ ((0 1) 2))))
nTup n3 → (λ (λ (λ (λ (((0 1) 2) 3)))))
nTup n4 → (λ (λ (λ (λ (λ ((((0 1) 2) 3) 4))))))
And so on. It stores the elements backwards but I don't think I'm gonna fix that - it looks more natural like so. The challenge was getting that 0 on the leftmost innermost paren. As I said, I could easily get both (0 (1 (2 (3 4)))) and ((((4 3) 2) 1) 0), but those don't work as tuples because that 0 is what holds the elements there.
Thank you all!
Edit: I've actually settled with this one:
nTup = (λ a . (a (λ b c d . (b (λ b . (c b d)))) (λ x . x) (λ x . x)))
Which preserves the correct order.
nTup n4 → (λ (λ (λ (λ (λ ((((0 4) 3) 2) 1))))))
If you can construct the n-tuples, you can easily access the ith index.
First, we need a Haskell type to embed untyped lambda terms, whose self-application would otherwise demand an infinite type. The extra X constructor allows us to inspect these functions by executing them.
import Prelude hiding (succ, pred)
data Value x = X x | F (Value x -> Value x)
instance (Show x) => Show (Value x) where
  show (X x) = "X " ++ show x
  show _ = "F"
It's convenient to be able to apply functions to each other.
ap :: Value x -> Value x -> Value x
ap (F f) = f
ap _ = error "Attempt to apply Value"
infixl 1 `ap`
If you are going to encode numbers you will need some Church numerals. We will also need subtraction to figure out how many additional arguments to skip when indexing into an n-tuple.
idF = F $ \x -> x
zero = F $ \f -> idF
succ = F $ \n -> F $ \f -> F $ \x -> f `ap` (n `ap` f `ap` x)
one = succ `ap` zero
two = succ `ap` one
three = succ `ap` two
four = succ `ap` three
pred = F $ \n -> F $ \f -> F $ \x -> n `ap` (F $ \g -> F $ \h -> h `ap` (g `ap` f)) `ap` (F $ \u -> x) `ap` idF
subtractF = F $ \n -> (n `ap` pred)
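For inspecting numerals, a small viewer helps; toInt is my own debugging aid, not part of the answer:
-- Run a Church numeral on the successor function over X-wrapped Ints.
toInt :: Value Int -> Int
toInt n = case n `ap` F (\(X k) -> X (k + 1)) `ap` X 0 of
  X k -> k
  _ -> error "not a numeral"
-- e.g. toInt (subtractF `ap` one `ap` three) == 2, confirming that
-- subtractF `ap` n `ap` m computes m - n.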
The constant function drops its first argument. If we iterate the constant function some numeral number of times, it drops that many first arguments.
--drops the first argument
constF = F $ \f -> F $ \x -> f
-- drops i first arguments
constN = F $ \i -> i `ap` constF
We can make another constant function that drops its second argument. If we iterate it some numeral number of times, it drops that many second arguments.
-- drops the second argument
constF' = F $ \f -> F $ \a -> F $ \b -> f `ap` a
-- drops n second arguments
constN' = F $ \n -> n `ap` constF'
To index into an n tuple's ith index (starting at zero for the first index), we need to drop n-i-1 arguments off the end and drop i arguments off the start.
-- drops (n-i-1) last arguments and i first arguments
access = F $ \n -> F $ \i -> constN `ap` i `ap` (constN' `ap` (subtractF `ap` (succ `ap` i) `ap` n) `ap` idF)
We'll define a few example tuples of fixed size
tuple1 = F $ \a -> F $ \t -> t `ap` a
tuple2 = F $ \a -> F $ \b -> F $ \t -> t `ap` a `ap` b
tuple3 = F $ \a -> F $ \b -> F $ \c -> F $ \t -> t `ap` a `ap` b `ap` c
which we can use to demonstrate that it is possible to generate the corresponding accessors.
main = do
  print $ tuple1 `ap` (X "Example") `ap` (access `ap` one `ap` zero)
  print $ tuple2 `ap` (X "Hello") `ap` (X "World") `ap` (access `ap` two `ap` zero)
  print $ tuple2 `ap` (X "Hello") `ap` (X "World") `ap` (access `ap` two `ap` one)
  print $ tuple3 `ap` (X "Goodbye") `ap` (X "Cruel") `ap` (X "World") `ap` (access `ap` three `ap` zero)
  print $ tuple3 `ap` (X "Goodbye") `ap` (X "Cruel") `ap` (X "World") `ap` (access `ap` three `ap` one)
  print $ tuple3 `ap` (X "Goodbye") `ap` (X "Cruel") `ap` (X "World") `ap` (access `ap` three `ap` two)
Running this outputs
X "Example"
X "Hello"
X "World"
X "Goodbye"
X "Cruel"
X "World"
To construct tuples you will need to iterate some function that adds arguments to a function instead of dropping them.
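Following that hint, the constructor from the accepted answer above drops straight into this encoding. A sketch (nTup is my transcription of that answer's final term λ a . a (λ b c d . b (λ b . c b d)) (λ x . x) (λ x . x), with the shadowed inner b renamed to e):
nTup :: Value x
nTup = F $ \n -> n `ap` step `ap` idF `ap` idF
  where
    -- step = λ b c d . b (λ e . c e d)
    step = F $ \b -> F $ \c -> F $ \d ->
      b `ap` (F $ \e -> c `ap` e `ap` d)
-- e.g. nTup `ap` two `ap` X "fst" `ap` X "snd" `ap` (access `ap` two `ap` one)
-- evaluates to X "snd", so construction and access fit together.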

Zipper Comonads, Generically

Given any container type we can form the (element-focused) Zipper and know that this structure is a Comonad. This was recently explored in wonderful detail in another Stack Overflow question for the following type:
data Bin a = Branch (Bin a) a (Bin a) | Leaf a deriving Functor
with the following zipper
data Dir = L | R
data Step a = Step a Dir (Bin a) deriving Functor
data Zip a = Zip [Step a] (Bin a) deriving Functor
instance Comonad Zip where ...
It is the case that Zip is a Comonad though the construction of its instance is a little hairy. That said, Zip can be completely mechanically derived from Tree and (I believe) any type derived this way is automatically a Comonad, so I feel it ought to be the case that we can construct these types and their comonads generically and automatically.
One method for achieving generality for zipper construction is to use the following class and type family
data Zipper t a = Zipper { diff :: D t a, here :: a }
deriving instance Diff t => Functor (Zipper t)
class (Functor t, Functor (D t)) => Diff t where
  data D t :: * -> *
  inTo :: t a -> t (Zipper t a)
  outOf :: Zipper t a -> t a
which has (more or less) shown up in Haskell Cafe threads and on Conal Elliott's blog. This class can be instantiated for the various core algebraic types and thus provides a general framework for talking about the derivatives of ADTs.
So, ultimately, my question is whether or not we can write
instance Diff t => Comonad (Zipper t) where ...
which could be used to subsume the specific Comonad instance described above:
instance Diff Bin where
  data D Bin a = DBin { context :: [Step a], descend :: Maybe (Bin a, Bin a) }
  ...
Unfortunately, I've had no luck writing such an instance. Is the inTo/outOf signature sufficient? Is there something else needed to constrain the types? Is this instance even possible?
Like the childcatcher in Chitty-Chitty-Bang-Bang luring kids into captivity with sweets and toys, recruiters to undergraduate Physics like to fool about with soap bubbles and boomerangs, but when the door clangs shut, it's "Right, children, time to learn about partial differentiation!". Me too. Don't say I didn't warn you.
Here's another warning: the following code needs {-# LANGUAGE KitchenSink #-}, or rather
{-# LANGUAGE TypeFamilies, FlexibleContexts, TupleSections, GADTs, DataKinds,
TypeOperators, FlexibleInstances, RankNTypes, ScopedTypeVariables,
StandaloneDeriving, UndecidableInstances #-}
in no particular order.
Differentiable functors give comonadic zippers
What is a differentiable functor, anyway?
class (Functor f, Functor (DF f)) => Diff1 f where
  type DF f :: * -> *
  upF :: ZF f x -> f x
  downF :: f x -> f (ZF f x)
  aroundF :: ZF f x -> ZF f (ZF f x)
data ZF f x = (:<-:) {cxF :: DF f x, elF :: x}
It's a functor which has a derivative, which is also a functor. The derivative represents a one-hole context for an element. The zipper type ZF f x represents the pair of a one-hole context and the element in the hole.
The operations for Diff1 describe the kinds of navigation we can do on zippers (without any notion of "leftward" and "rightward", for which see my Clowns and Jokers paper). We can go "upward", reassembling the structure by plugging the element in its hole. We can go "downward", finding every way to visit an element in a given structure: we decorate every element with its context. We can go "around",
taking an existing zipper and decorating each element with its context, so we find all the ways to refocus (and how to keep our current focus).
Now, the type of aroundF might remind some of you of
class Functor c => Comonad c where
  extract :: c x -> x
  duplicate :: c x -> c (c x)
and you're right to be reminded! We have, with a hop and a skip,
instance Diff1 f => Functor (ZF f) where
  fmap f (df :<-: x) = fmap f df :<-: f x
instance Diff1 f => Comonad (ZF f) where
  extract = elF
  duplicate = aroundF
and we insist that
extract . duplicate == id
fmap extract . duplicate == id
duplicate . duplicate == fmap duplicate . duplicate
We also need that
fmap extract (downF xs) == xs -- downF decorates the element in position
fmap upF (downF xs) == fmap (const xs) xs -- downF gives the correct context
Polynomial functors are differentiable
Constant functors are differentiable.
data KF a x = KF a
instance Functor (KF a) where
  fmap f (KF a) = KF a
instance Diff1 (KF a) where
  type DF (KF a) = KF Void
  upF (KF w :<-: _) = absurd w
  downF (KF a) = KF a
  aroundF (KF w :<-: _) = absurd w
There's nowhere to put an element, so it's impossible to form a context. There's nowhere to go upF or downF from, and we easily find all none of the ways to go downF.
The identity functor is differentiable.
data IF x = IF x
instance Functor IF where
  fmap f (IF x) = IF (f x)
instance Diff1 IF where
  type DF IF = KF ()
  upF (KF () :<-: x) = IF x
  downF (IF x) = IF (KF () :<-: x)
  aroundF z@(KF () :<-: x) = KF () :<-: z
There's one element in a trivial context, downF finds it, upF repacks it, and aroundF can only stay put.
Sum preserves differentiability.
data (f :+: g) x = LF (f x) | RF (g x)
instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (LF f) = LF (fmap h f)
  fmap h (RF g) = RF (fmap h g)
instance (Diff1 f, Diff1 g) => Diff1 (f :+: g) where
  type DF (f :+: g) = DF f :+: DF g
  upF (LF f' :<-: x) = LF (upF (f' :<-: x))
  upF (RF g' :<-: x) = RF (upF (g' :<-: x))
The other bits and pieces are a bit more of a handful. To go downF, we must go downF inside the tagged component, then fix up the resulting zippers to show the tag in the context.
  downF (LF f) = LF (fmap (\ (f' :<-: x) -> LF f' :<-: x) (downF f))
  downF (RF g) = RF (fmap (\ (g' :<-: x) -> RF g' :<-: x) (downF g))
To go aroundF, we strip the tag, figure out how to go around the untagged thing, then restore the tag in all the resulting zippers. The element in focus, x, is replaced by its entire zipper, z.
  aroundF z@(LF f' :<-: (x :: x)) =
    LF (fmap (\ (f' :<-: x) -> LF f' :<-: x) . cxF $ aroundF (f' :<-: x :: ZF f x))
    :<-: z
  aroundF z@(RF g' :<-: (x :: x)) =
    RF (fmap (\ (g' :<-: x) -> RF g' :<-: x) . cxF $ aroundF (g' :<-: x :: ZF g x))
    :<-: z
Note that I had to use ScopedTypeVariables to disambiguate the recursive calls to aroundF. As a type function, DF is not injective, so the fact that f' :: DF f x is not enough to force f' :<-: x :: ZF f x.
Product preserves differentiability.
data (f :*: g) x = f x :*: g x
instance (Functor f, Functor g) => Functor (f :*: g) where
  fmap h (f :*: g) = fmap h f :*: fmap h g
To focus on an element in a pair, you either focus on the left and leave the right alone, or vice versa. Leibniz's famous product rule corresponds to a simple spatial intuition!
instance (Diff1 f, Diff1 g) => Diff1 (f :*: g) where
  type DF (f :*: g) = (DF f :*: g) :+: (f :*: DF g)
  upF (LF (f' :*: g) :<-: x) = upF (f' :<-: x) :*: g
  upF (RF (f :*: g') :<-: x) = f :*: upF (g' :<-: x)
Now, downF works similarly to the way it did for sums, except that we have to fix up the zipper context not only with a tag (to show which way we went) but also with the untouched other component.
  downF (f :*: g)
    =   fmap (\ (f' :<-: x) -> LF (f' :*: g) :<-: x) (downF f)
    :*: fmap (\ (g' :<-: x) -> RF (f :*: g') :<-: x) (downF g)
But aroundF is a massive bag of laughs. Whichever side we are currently visiting, we have two choices:
Move aroundF on that side.
Move upF out of that side and downF into the other side.
Each case requires us to make use of the operations for the substructure, then fix up contexts.
  aroundF z@(LF (f' :*: g) :<-: (x :: x)) =
    LF (fmap (\ (f' :<-: x) -> LF (f' :*: g) :<-: x)
             (cxF $ aroundF (f' :<-: x :: ZF f x))
        :*: fmap (\ (g' :<-: x) -> RF (f :*: g') :<-: x) (downF g))
    :<-: z
    where f = upF (f' :<-: x)
  aroundF z@(RF (f :*: g') :<-: (x :: x)) =
    RF (fmap (\ (f' :<-: x) -> LF (f' :*: g) :<-: x) (downF f) :*:
        fmap (\ (g' :<-: x) -> RF (f :*: g') :<-: x)
             (cxF $ aroundF (g' :<-: x :: ZF g x)))
    :<-: z
    where g = upF (g' :<-: x)
Phew! The polynomials are all differentiable, and thus give us comonads.
Hmm. It's all a bit abstract. So I added deriving Show everywhere I could, and threw in
deriving instance (Show (DF f x), Show x) => Show (ZF f x)
which allowed the following interaction (tidied up by hand)
> downF (IF 1 :*: IF 2)
IF (LF (KF () :*: IF 2) :<-: 1) :*: IF (RF (IF 1 :*: KF ()) :<-: 2)
> fmap aroundF it
IF (LF (KF () :*: IF (RF (IF 1 :*: KF ()) :<-: 2)) :<-: (LF (KF () :*: IF 2) :<-: 1))
:*:
IF (RF (IF (LF (KF () :*: IF 2) :<-: 1) :*: KF ()) :<-: (RF (IF 1 :*: KF ()) :<-: 2))
Exercise: Show that the composition of differentiable functors is differentiable, using the chain rule.
Sweet! Can we go home now? Of course not. We haven't differentiated any recursive structures yet.
Making recursive functors from bifunctors
A Bifunctor, as the existing literature on datatype generic programming (see work by Patrik Jansson and Johan Jeuring, or excellent lecture notes by Jeremy Gibbons) explains at length, is a type constructor with two parameters, corresponding to two sorts of substructure. We should be able to "map" both.
class Bifunctor b where
  bimap :: (x -> x') -> (y -> y') -> b x y -> b x' y'
We can use Bifunctors to give the node structure of recursive containers. Each node has subnodes and elements. These can just be the two sorts of substructure.
data Mu b y = In (b (Mu b y) y)
See? We "tie the recursive knot" in b's first argument, and keep the parameter y in its second. Accordingly, we obtain once for all
instance Bifunctor b => Functor (Mu b) where
  fmap f (In b) = In (bimap (fmap f) f b)
To use this, we'll need a kit of Bifunctor instances.
The Bifunctor Kit
Constants are bifunctorial.
newtype K a x y = K a
instance Bifunctor (K a) where
  bimap f g (K a) = K a
You can tell I wrote this bit first, because the identifiers are shorter, but that's good because the code is longer.
Variables are bifunctorial.
We need the bifunctors corresponding to one parameter or the other, so I made a datatype to distinguish them, then defined a suitable GADT.
data Var = X | Y
data V :: Var -> * -> * -> * where
  XX :: x -> V X x y
  YY :: y -> V Y x y
That makes V X x y a copy of x and V Y x y a copy of y. Accordingly
instance Bifunctor (V v) where
  bimap f g (XX x) = XX (f x)
  bimap f g (YY y) = YY (g y)
Sums and Products of bifunctors are bifunctors
data (:++:) f g x y = L (f x y) | R (g x y) deriving Show
instance (Bifunctor b, Bifunctor c) => Bifunctor (b :++: c) where
  bimap f g (L b) = L (bimap f g b)
  bimap f g (R b) = R (bimap f g b)
data (:**:) f g x y = f x y :**: g x y deriving Show
instance (Bifunctor b, Bifunctor c) => Bifunctor (b :**: c) where
  bimap f g (b :**: c) = bimap f g b :**: bimap f g c
So far, so boilerplate, but now we can define things like
type List = Mu (K () :++: (V Y :**: V X))
type Bin = Mu (V Y :**: (K () :++: (V X :**: V X)))
If you want to use these types for actual data and not go blind in the pointilliste tradition of Georges Seurat, use pattern synonyms.
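For instance, with {-# LANGUAGE PatternSynonyms #-} one could write something like this for the List encoding above (the synonym names are mine):
pattern Nil :: List y
pattern Nil = In (L (K ()))

pattern Cons :: y -> List y -> List y
pattern Cons y ys = In (R (YY y :**: XX ys))
-- so Cons 'a' (Cons 'b' Nil) :: List Char reads like an ordinary list.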
But what of zippers? How shall we show that Mu b is differentiable? We shall need to show that b is differentiable in both variables. Clang! It's time to learn about partial differentiation.
Partial derivatives of bifunctors
Because we have two variables, we shall need to be able to talk about them collectively sometimes and individually at other times. We shall need the singleton family:
data Vary :: Var -> * where
  VX :: Vary X
  VY :: Vary Y
Now we can say what it means for a Bifunctor to have partial derivatives at each variable, and give the corresponding notion of zipper.
class (Bifunctor b, Bifunctor (D b X), Bifunctor (D b Y)) => Diff2 b where
  type D b (v :: Var) :: * -> * -> *
  up :: Vary v -> Z b v x y -> b x y
  down :: b x y -> b (Z b X x y) (Z b Y x y)
  around :: Vary v -> Z b v x y -> Z b v (Z b X x y) (Z b Y x y)
data Z b v x y = (:<-) {cxZ :: D b v x y, elZ :: V v x y}
This D operation needs to know which variable to target. The corresponding zipper Z b v tells us which variable v must be in focus. When we "decorate with context", we have to decorate x-elements with X-contexts and y-elements with Y-contexts. But otherwise, it's the same story.
We have two remaining tasks: firstly, to show that our bifunctor kit is differentiable; secondly, to show that Diff2 b allows us to establish Diff1 (Mu b).
Differentiating the Bifunctor kit
I'm afraid this bit is fiddly rather than edifying. Feel free to skip along.
The constants are as before.
instance Diff2 (K a) where
  type D (K a) v = K Void
  up _ (K q :<- _) = absurd q
  down (K a) = K a
  around _ (K q :<- _) = absurd q
On this occasion, life is too short to develop the theory of the type level Kronecker-delta, so I just treated the variables separately.
instance Diff2 (V X) where
  type D (V X) X = K ()
  type D (V X) Y = K Void
  up VX (K () :<- XX x) = XX x
  up VY (K q :<- _) = absurd q
  down (XX x) = XX (K () :<- XX x)
  around VX z@(K () :<- XX x) = K () :<- XX z
  around VY (K q :<- _) = absurd q
instance Diff2 (V Y) where
  type D (V Y) X = K Void
  type D (V Y) Y = K ()
  up VX (K q :<- _) = absurd q
  up VY (K () :<- YY y) = YY y
  down (YY y) = YY (K () :<- YY y)
  around VX (K q :<- _) = absurd q
  around VY z@(K () :<- YY y) = K () :<- YY z
For the structural cases, I found it useful to introduce a helper allowing me to treat variables uniformly.
vV :: Vary v -> Z b v x y -> V v (Z b X x y) (Z b Y x y)
vV VX z = XX z
vV VY z = YY z
I then built gadgets to facilitate the kind of "retagging" we need for down and around. (Of course, I saw which gadgets I needed as I was working.)
zimap :: (Bifunctor c) => (forall v. Vary v -> D b v x y -> D b' v x y) ->
         c (Z b X x y) (Z b Y x y) -> c (Z b' X x y) (Z b' Y x y)
zimap f = bimap
  (\ (d :<- XX x) -> f VX d :<- XX x)
  (\ (d :<- YY y) -> f VY d :<- YY y)
dzimap :: (Bifunctor (D c X), Bifunctor (D c Y)) =>
          (forall v. Vary v -> D b v x y -> D b' v x y) ->
          Vary v -> Z c v (Z b X x y) (Z b Y x y) -> D c v (Z b' X x y) (Z b' Y x y)
dzimap f VX (d :<- _) = bimap
  (\ (d :<- XX x) -> f VX d :<- XX x)
  (\ (d :<- YY y) -> f VY d :<- YY y)
  d
dzimap f VY (d :<- _) = bimap
  (\ (d :<- XX x) -> f VX d :<- XX x)
  (\ (d :<- YY y) -> f VY d :<- YY y)
  d
And with that lot ready to go, we can grind out the details. Sums are easy.
instance (Diff2 b, Diff2 c) => Diff2 (b :++: c) where
  type D (b :++: c) v = D b v :++: D c v
  up v (L b' :<- vv) = L (up v (b' :<- vv))
  up v (R c' :<- vv) = R (up v (c' :<- vv))
  down (L b) = L (zimap (const L) (down b))
  down (R c) = R (zimap (const R) (down c))
  around v z@(L b' :<- vv :: Z (b :++: c) v x y)
    = L (dzimap (const L) v ba) :<- vV v z
    where ba = around v (b' :<- vv :: Z b v x y)
  around v z@(R c' :<- vv :: Z (b :++: c) v x y)
    = R (dzimap (const R) v ca) :<- vV v z
    where ca = around v (c' :<- vv :: Z c v x y)
Products are hard work, which is why I'm a mathematician rather than an engineer.
instance (Diff2 b, Diff2 c) => Diff2 (b :**: c) where
  type D (b :**: c) v = (D b v :**: c) :++: (b :**: D c v)
  up v (L (b' :**: c) :<- vv) = up v (b' :<- vv) :**: c
  up v (R (b :**: c') :<- vv) = b :**: up v (c' :<- vv)
  down (b :**: c) =
    zimap (const (L . (:**: c))) (down b) :**: zimap (const (R . (b :**:))) (down c)
  around v z@(L (b' :**: c) :<- vv :: Z (b :**: c) v x y)
    = L (dzimap (const (L . (:**: c))) v ba :**:
         zimap (const (R . (b :**:))) (down c))
      :<- vV v z where
      b = up v (b' :<- vv :: Z b v x y)
      ba = around v (b' :<- vv :: Z b v x y)
  around v z@(R (b :**: c') :<- vv :: Z (b :**: c) v x y)
    = R (zimap (const (L . (:**: c))) (down b) :**:
         dzimap (const (R . (b :**:))) v ca)
      :<- vV v z where
      c = up v (c' :<- vv :: Z c v x y)
      ca = around v (c' :<- vv :: Z c v x y)
Conceptually, it's just as before, but with more bureaucracy. I built these using pre-type-hole technology, using undefined as a stub in places I wasn't ready to work, and introducing a deliberate type error in the one place (at any given time) where I wanted a useful hint from the typechecker. You too can have the typechecking-as-videogame experience, even in Haskell.
Subnode zippers for recursive containers
The partial derivative of b with respect to X tells us how to find a subnode one step inside a node, so we get the conventional notion of zipper.
data MuZpr b y = MuZpr
  { aboveMu :: [D b X (Mu b y) y]
  , hereMu  :: Mu b y
  }
We can zoom all the way up to the root by repeated plugging in X positions.
muUp :: Diff2 b => MuZpr b y -> Mu b y
muUp (MuZpr {aboveMu = [], hereMu = t}) = t
muUp (MuZpr {aboveMu = (dX : dXs), hereMu = t}) =
  muUp (MuZpr {aboveMu = dXs, hereMu = In (up VX (dX :<- XX t))})
But we need element-zippers.
Element-zippers for fixpoints of bifunctors
Each element is somewhere inside a node. That node is sitting under a stack of X-derivatives. But the position of the element in that node is given by a Y-derivative. We get
data MuCx b y = MuCx
  { aboveY :: [D b X (Mu b y) y]
  , belowY :: D b Y (Mu b y) y
  }
instance Diff2 b => Functor (MuCx b) where
  fmap f (MuCx { aboveY = dXs, belowY = dY }) = MuCx
    { aboveY = map (bimap (fmap f) f) dXs
    , belowY = bimap (fmap f) f dY
    }
Boldly, I claim
instance Diff2 b => Diff1 (Mu b) where
  type DF (Mu b) = MuCx b
but before I develop the operations, I'll need some bits and pieces.
I can trade data between functor-zippers and bifunctor-zippers as follows:
zAboveY :: ZF (Mu b) y -> [D b X (Mu b y) y] -- the stack of `X`-derivatives above me
zAboveY (d :<-: y) = aboveY d
zZipY :: ZF (Mu b) y -> Z b Y (Mu b y) y -- the `Y`-zipper where I am
zZipY (d :<-: y) = belowY d :<- YY y
That's enough to let me define:
upF z = muUp (MuZpr {aboveMu = zAboveY z, hereMu = In (up VY (zZipY z))})
That is, we go up by first reassembling the node where the element is, turning an element-zipper into a subnode-zipper, then zooming all the way out, as above.
Next, I say
downF = yOnDown []
to go down starting with the empty stack, and define the helper function which goes down repeatedly from below any stack:
yOnDown :: Diff2 b => [D b X (Mu b y) y] -> Mu b y -> Mu b (ZF (Mu b) y)
yOnDown dXs (In b) = In (contextualize dXs (down b))
Now, down b only takes us inside the node. The zippers we need must also carry the node's context. That's what contextualize does:
contextualize :: (Bifunctor c, Diff2 b) =>
                 [D b X (Mu b y) y] ->
                 c (Z b X (Mu b y) y) (Z b Y (Mu b y) y) ->
                 c (Mu b (ZF (Mu b) y)) (ZF (Mu b) y)
contextualize dXs = bimap
  (\ (dX :<- XX t) -> yOnDown (dX : dXs) t)
  (\ (dY :<- YY y) -> MuCx {aboveY = dXs, belowY = dY} :<-: y)
For every Y-position, we must give an element-zipper, so it is good we know the whole context dXs back to the root, as well as the dY which describes how the element sits in its node. For every X-position, there is a further subtree to explore, so we grow the stack and keep going!
That leaves only the business of shifting focus. We might stay put, or go down from where we are, or go up, or go up and then down some other path. Here goes.
aroundF z@(MuCx {aboveY = dXs, belowY = dY} :<-: _) = MuCx
  { aboveY = yOnUp dXs (In (up VY (zZipY z)))
  , belowY = contextualize dXs (cxZ $ around VY (zZipY z))
  } :<-: z
As ever, the existing element is replaced by its entire zipper. For the belowY part, we look where else we can go in the existing node: we will find either alternative element Y-positions or further X-subnodes to explore, so we contextualise them. For the aboveY part, we must work our way back up the stack of X-derivatives after reassembling the node we were visiting.
yOnUp :: Diff2 b => [D b X (Mu b y) y] -> Mu b y ->
         [D b X (Mu b (ZF (Mu b) y)) (ZF (Mu b) y)]
yOnUp [] t = []
yOnUp (dX : dXs) (t :: Mu b y)
  = contextualize dXs (cxZ $ around VX (dX :<- XX t))
  : yOnUp dXs (In (up VX (dX :<- XX t)))
At each step of the way, we can either turn somewhere else that's around, or keep going up.
And that's it! I haven't given a formal proof of the laws, but it looks to me as if the operations carefully maintain the context correctly as they crawl the structure.
What have we learned?
Differentiability induces notions of thing-in-its-context, inducing a comonadic structure where extract gives you the thing and duplicate explores the context looking for other things to contextualise. If we have the appropriate differential structure for nodes, we can develop differential structure for whole trees.
Oh, and treating each individual arity of type constructor separately is blatantly horrendous. The better way is to work with functors between indexed sets
f :: (i -> *) -> (o -> *)
where we make o different sorts of structure storing i different sorts of element. These are closed under the Jacobian construction
J f :: (i -> *) -> ((o, i) -> *)
where each of the resulting (o, i)-structures is a partial derivative, telling you how to make an i-element-hole in an o-structure. But that's dependently typed fun, for another time.
The Comonad instance for zippers is not
instance (Diff t, Diff (D t)) => Comonad (Zipper t) where
  extract = here
  duplicate = fmap outOf . inTo
where outOf and inTo come from the Diff instance for Zipper t itself. The above instance violates the Comonad law fmap extract . duplicate == id. Instead it behaves like:
fmap extract . duplicate == \z -> fmap (const (here z)) z
Diff (Zipper t)
The Diff instance for Zipper is provided by identifying them as products and reusing the code for products (below).
-- Zippers are themselves products
toZipper :: (D t :*: Identity) a -> Zipper t a
toZipper (d :*: (Identity h)) = Zipper d h
fromZipper :: Zipper t a -> (D t :*: Identity) a
fromZipper (Zipper d h) = (d :*: (Identity h))
Given an isomorphism between data types, and an isomorphism between their derivatives, we can reuse one type's inTo and outOf for the other.
inToFor' :: (Diff r) =>
            (forall a. r a -> t a) ->
            (forall a. t a -> r a) ->
            (forall a. D r a -> D t a) ->
            (forall a. D t a -> D r a) ->
            t a -> t (Zipper t a)
inToFor' to from toD fromD = to . fmap (onDiff toD) . inTo . from

outOfFor' :: (Diff r) =>
             (forall a. r a -> t a) ->
             (forall a. t a -> r a) ->
             (forall a. D r a -> D t a) ->
             (forall a. D t a -> D r a) ->
             Zipper t a -> t a
outOfFor' to from toD fromD = to . outOf . onDiff fromD
For types that are just newtypes of an existing Diff instance, their derivatives are the same type. If we tell the type checker about that type equality D r ~ D t, we can take advantage of that instead of providing an isomorphism for the derivatives.
inToFor :: (Diff r, D r ~ D t) =>
           (forall a. r a -> t a) ->
           (forall a. t a -> r a) ->
           t a -> t (Zipper t a)
inToFor to from = inToFor' to from id id

outOfFor :: (Diff r, D r ~ D t) =>
            (forall a. r a -> t a) ->
            (forall a. t a -> r a) ->
            Zipper t a -> t a
outOfFor to from = outOfFor' to from id id
Equipped with these tools, we can reuse the Diff instance for products to implement Diff (Zipper t)
-- This requires undecidable instances, due to the need to take D (D t)
instance (Diff t, Diff (D t)) => Diff (Zipper t) where
  type D (Zipper t) = D ((D t) :*: Identity)
  -- inTo :: t a -> t (Zipper t a)
  -- inTo :: Zipper t a -> Zipper t (Zipper (Zipper t) a)
  inTo = inToFor toZipper fromZipper
  -- outOf :: Zipper t a -> t a
  -- outOf :: Zipper (Zipper t) a -> Zipper t a
  outOf = outOfFor toZipper fromZipper
Boilerplate
In order to actually use the code presented here, we need some language extensions, imports, and a restatement of the proposed problem.
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE RankNTypes #-}
import Control.Monad.Identity
import Data.Proxy
import Control.Comonad
data Zipper t a = Zipper { diff :: D t a, here :: a }
onDiff :: (D t a -> D u a) -> Zipper t a -> Zipper u a
onDiff f (Zipper d a) = Zipper (f d) a
deriving instance Diff t => Functor (Zipper t)
deriving instance (Eq (D t a), Eq a) => Eq (Zipper t a)
deriving instance (Show (D t a), Show a) => Show (Zipper t a)
class (Functor t, Functor (D t)) => Diff t where
  type D t :: * -> *
  inTo :: t a -> t (Zipper t a)
  outOf :: Zipper t a -> t a
Products, Sums, and Constants
The Diff (Zipper t) instance relies on implementations of Diff for products :*:, sums :+:, the identity Identity, and the constant Proxy.
data (:+:) a b x = InL (a x) | InR (b x)
  deriving (Eq, Show)
data (:*:) a b x = a x :*: b x
  deriving (Eq, Show)
infixl 7 :*:
infixl 6 :+:
deriving instance (Functor a, Functor b) => Functor (a :*: b)
instance (Functor a, Functor b) => Functor (a :+: b) where
  fmap f (InL a) = InL . fmap f $ a
  fmap f (InR b) = InR . fmap f $ b
instance (Diff a, Diff b) => Diff (a :*: b) where
  type D (a :*: b) = D a :*: b :+: a :*: D b
  inTo (a :*: b) =
    (fmap (onDiff (InL . (:*: b))) . inTo) a :*:
    (fmap (onDiff (InR . (a :*:))) . inTo) b
  outOf (Zipper (InL (a :*: b)) x) = (:*: b) . outOf . Zipper a $ x
  outOf (Zipper (InR (a :*: b)) x) = (a :*:) . outOf . Zipper b $ x
instance (Diff a, Diff b) => Diff (a :+: b) where
  type D (a :+: b) = D a :+: D b
  inTo (InL a) = InL . fmap (onDiff InL) . inTo $ a
  inTo (InR b) = InR . fmap (onDiff InR) . inTo $ b
  outOf (Zipper (InL a) x) = InL . outOf . Zipper a $ x
  outOf (Zipper (InR a) x) = InR . outOf . Zipper a $ x
instance Diff (Identity) where
  type D (Identity) = Proxy
  inTo = Identity . (Zipper Proxy) . runIdentity
  outOf = Identity . here
instance Diff (Proxy) where
  type D (Proxy) = Proxy
  inTo = const Proxy
  outOf = const Proxy
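As a quick hand-evaluated check of how the product and Identity instances interact (my own example, following the definitions above):
-- inTo (Identity 'a' :*: Identity 'b')
--   = Identity (Zipper (InL (Proxy :*: Identity 'b')) 'a')
--     :*: Identity (Zipper (InR (Identity 'a' :*: Proxy)) 'b')
Each element is paired with a context recording which side it came from and the untouched other component.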
Bin Example
I posed the Bin example as an isomorphism to a sum of products. We need not only its derivative but its second derivative as well
newtype Bin a = Bin {unBin :: (Bin :*: Identity :*: Bin :+: Identity) a}
  deriving (Functor, Eq, Show)
newtype DBin a = DBin {unDBin :: D (Bin :*: Identity :*: Bin :+: Identity) a}
  deriving (Functor, Eq, Show)
newtype DDBin a = DDBin {unDDBin :: D (D (Bin :*: Identity :*: Bin :+: Identity)) a}
  deriving (Functor, Eq, Show)
instance Diff Bin where
  type D Bin = DBin
  inTo = inToFor' Bin unBin DBin unDBin
  outOf = outOfFor' Bin unBin DBin unDBin
instance Diff DBin where
  type D DBin = DDBin
  inTo = inToFor' DBin unDBin DDBin unDDBin
  outOf = outOfFor' DBin unDBin DDBin unDDBin
The example data from the previous answer is
aTree :: Bin Int
aTree =
  (Bin . InL) (
    (Bin . InL) (
      (Bin . InR) (Identity 2)
      :*: (Identity 1) :*:
      (Bin . InR) (Identity 3)
    )
    :*: (Identity 0) :*:
    (Bin . InR) (Identity 4)
  )
Not the Comonad instance
The Bin example above provides a counter-example to fmap outOf . inTo being the correct implementation of duplicate for Zipper t. In particular, it provides a counter-example to the fmap extract . duplicate = id law:
fmap ( \z -> (fmap extract . duplicate) z == z) . inTo $ aTree
Which evaluates to (notice how it is full of Falses everywhere, any False would be enough to disprove the law)
Bin {unBin = InL ((Bin {unBin = InL ((Bin {unBin = InR (Identity False)} :*: Identity False) :*: Bin {unBin = InR (Identity False)})} :*: Identity False) :*: Bin {unBin = InR (Identity False)})}
inTo aTree is a tree with the same structure as aTree, but everywhere there was a value there is instead a zipper with the value, and the remainder of the tree with all of the original values intact. fmap (fmap extract . duplicate) . inTo $ aTree is also a tree with the same structure as aTree, but everywhere there was a value there is instead a zipper with the value, and the remainder of the tree with all of the values replaced with that same value. In other words:
fmap extract . duplicate == \z -> fmap (const (here z)) z
The complete test-suite for all three Comonad laws, extract . duplicate == id, fmap extract . duplicate == id, and duplicate . duplicate == fmap duplicate . duplicate is
main = do
  putStrLn "fmap (\\z -> (extract . duplicate) z == z) . inTo $ aTree"
  print . fmap ( \z -> (extract . duplicate) z == z) . inTo $ aTree
  putStrLn ""
  putStrLn "fmap (\\z -> (fmap extract . duplicate) z == z) . inTo $ aTree"
  print . fmap ( \z -> (fmap extract . duplicate) z == z) . inTo $ aTree
  putStrLn ""
  putStrLn "fmap (\\z -> (duplicate . duplicate) z == (fmap duplicate . duplicate) z) . inTo $ aTree"
  print . fmap ( \z -> (duplicate . duplicate) z == (fmap duplicate . duplicate) z) . inTo $ aTree
Given an infinitely differentiable Diff class:
class (Functor t, Functor (D t)) => Diff t where
  type D t :: * -> *
  up :: Zipper t a -> t a
  down :: t a -> t (Zipper t a)
  -- Require that types be infinitely differentiable
  ddiff :: p t -> Dict (Diff (D t))
around can be written in terms of up and down on the Zipper's diff's derivative, essentially as
around z@(Zipper d h) = Zipper ctx z
  where
    ctx = fmap (\z' -> Zipper (up z') (here z')) (down d)
The Zipper t a consists of a D t a and an a. We go down the D t a, getting a D t (Zipper (D t) a) with a zipper in every hole. Those zippers consist of a D (D t) a and the a that was in the hole. We go up each of them, getting a D t a and pairing it with the a that was in the hole. A D t a and an a make a Zipper t a, giving us a D t (Zipper t a), which is the context needed for a Zipper t (Zipper t a).
The Comonad instance is then simply
instance Diff t => Comonad (Zipper t) where
  extract = here
  duplicate = around
Capturing the derivative's Diff dictionary requires some additional plumbing, which can be done with Data.Constraint or in terms of the method presented in a related answer
around :: Diff t => Zipper t a -> Zipper t (Zipper t a)
around z = Zipper (withDict d' (fmap (\z' -> Zipper (up z') (here z')) (down (diff z)))) z
  where
    d' = ddiff . p' $ z
    p' :: Zipper t x -> Proxy t
    p' = const Proxy
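To see the plumbing land somewhere concrete, here is a minimal sketch of base instances for this formulation (my own example; Dict comes from the constraints package):
import Data.Constraint (Dict (..))

instance Diff Proxy where
  type D Proxy = Proxy
  up _ = Proxy    -- no elements, nothing to rebuild
  down _ = Proxy  -- no element positions to visit
  ddiff _ = Dict

instance Diff Identity where
  type D Identity = Proxy  -- exactly one position, trivial context
  up (Zipper _ a) = Identity a
  down (Identity a) = Identity (Zipper Proxy a)
  ddiff _ = Dict
With these, around on a Zipper Identity a computes Proxy as its new context and stays put, agreeing with aroundF for the identity functor in the first answer.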
