Using a paramorphism inside of an apomorphism - haskell

I'm trying to use paramorphisms and apomorphisms (in Haskell):
-- Fixed point of a Functor
newtype Fix f = In (f (Fix f))
deriving instance (Eq (f (Fix f))) => Eq (Fix f)
deriving instance (Ord (f (Fix f))) => Ord (Fix f)
deriving instance (Show (f (Fix f))) => Show (Fix f)
out :: Fix f -> f (Fix f)
out (In f) = f
type RAlgebra f a = f (Fix f, a) -> a
para :: (Functor f) => RAlgebra f a -> Fix f -> a
para rAlg = rAlg . fmap fanout . out
  where fanout t = (t, para rAlg t)
-- Apomorphism
type RCoalgebra f a = a -> f (Either (Fix f) a)
apo :: Functor f => RCoalgebra f a -> a -> Fix f
apo rCoalg = In . fmap fanin . rCoalg
  where fanin = either id (apo rCoalg)
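For intuition, here is a minimal, self-contained illustration of how an RCoalgebra drives apo (a hedged sketch with made-up names, assuming DeriveFunctor): ordered insertion into a list, where Left stops the unfolding and Right continues it.
data OrdListF e x = NilF | ConsF e x deriving Functor

insertCoalg :: Ord e => RCoalgebra (OrdListF e) (e, Fix (OrdListF e))
insertCoalg (x, In NilF) = ConsF x (Left (In NilF))
insertCoalg (x, In (ConsF y ys))
  | x <= y    = ConsF x (Left (In (ConsF y ys))) -- stop: keep the rest untouched
  | otherwise = ConsF y (Right (x, ys))          -- continue unfolding

insertOrd :: Ord e => e -> Fix (OrdListF e) -> Fix (OrdListF e)
insertOrd x xs = apo insertCoalg (x, xs)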
I want to use these to define the following recursive function (given here in Standard ML):
fun concat3 (v,E,r) = add(r,v)
  | concat3 (v,l,E) = add(l,v)
  | concat3 (v, l as T(v1,n1,l1,r1), r as T(v2,n2,l2,r2)) =
      if weight*n1 < n2 then T'(v2,concat3(v,l,l2),r2)
      else if weight*n2 < n1 then T'(v1,l1,concat3(v,r1,r))
      else N(v,l,r)
It takes an element and two binary trees, where the element is greater than every value in the left tree and less than every value in the right tree, and combines them into one binary tree (roughly value -> tree1 -> tree2 -> tree3).
I have defined the add function (which inserts an element into a binary tree) as a paramorphism like so:
add :: Ord a => a -> RAlgebra (ATreeF a) (ATreeF' a)
add elem EmptyATreeF = In (NodeATreeF elem 1 (In EmptyATreeF) (In EmptyATreeF))
add elem (NodeATreeF cur _ (prevLeft, left) (prevRight, right))
  | elem < cur = bATreeConstruct cur left prevRight
  | elem > cur = bATreeConstruct cur prevLeft right
  | otherwise = nATreeConstruct cur prevLeft prevRight
When I try to write concat3 as an apomorphism:
concat3 :: Ord a => a -> RCoalgebra (ATreeF a) (ATreeF' a, ATreeF' a)
concat3 elem (In EmptyATreeF, In (NodeATreeF cur2 size2 left2 right2)) =
out para (insertATreeFSetPAlg elem) (In (NodeATreeF cur2 size2 (Left left2) (Left right2)))
...
Because the next level of the apomorphism has not been evaluated yet, I get a type error from the compiler.
Couldn't match type: Fix (ATreeF a)
with: Either (Fix (ATreeF a)) (ATreeF' a, ATreeF' a)
Expected: ATreeF a (Either (Fix (ATreeF a)) (ATreeF' a, ATreeF' a))
Actual: ATreeF a (Fix (ATreeF a))
Is there another approach I can take?

Some missing context to explain the solution is that this is from an implementation of weight-balanced trees, specifically Adams's variant (which happens to be the data structure behind Data.Set and Data.Map.)
A problem when writing concat3 as a coalgebra is that it is not corecursive, strictly speaking, because the recursive calls of concat3 are under a smart constructor T', i.e., a function (which does some non-trivial rebalancing).
A solution is to introduce an intermediate representation which delays the evaluation of that smart constructor.
-- | Tree with delayed rebalancing operations T', or Id when no rebalancing is needed
data TreeF1 a x = E1 | T' a x x | Id (Tree a)
  deriving Functor
So we can write a coalgebra of TreeF1:
concatAlg :: Ord a => a -> RCoalgebra (TreeF1 a) (Tree a, Tree a)
concatAlg v (In E, r) = Id (add r v)
concatAlg v (l, In E) = Id (add l v)
concatAlg v (l@(In (T v1 n1 l1 r1)), r@(In (T v2 n2 l2 r2))) =
  if balance * n1 < n2 then T' v2 (Right (l, l2)) (Left (In (Id r2)))
  else if balance * n2 < n1 then T' v1 (Left (In (Id l1))) (Right (r1, r))
  else Id (_N v l r)
{- Reference implementation for comparison:
fun concat3 (v,E,r) = add(r,v)
  | concat3 (v,l,E) = add(l,v)
  | concat3 (v, l as T(v1,n1,l1,r1), r as T(v2,n2,l2,r2)) =
      if weight*n1 < n2 then T'(v2,concat3(v,l,l2),r2)
      else if weight*n2 < n1 then T'(v1,l1,concat3(v,r1,r))
      else N(v,l,r)
-}
And we can convert a Fix (TreeF1 a) to a Tree a via a catamorphism, finally executing those delayed applications of the rebalancing constructor T'.
_T :: a -> Tree a -> Tree a -> Tree a
_T = error "todo: rebalance"
type Algebra f a = f a -> a
-- do the rebalancing on T' v l r nodes
rebalanceAlg :: Algebra (TreeF1 a) (Tree a)
rebalanceAlg E1 = In E
rebalanceAlg (T' v l r) = _T v l r
rebalanceAlg (Id t) = t
So concat3 is a composition of cata and apo using the above algebras:
concat3 :: Ord a => a -> Tree a -> Tree a -> Tree a
concat3 v l r = (cata rebalanceAlg . apo (concatAlg v)) (l, r)
You can fuse cata and apo so that, after some elementary compiler optimizations, the intermediate tree does not get allocated:
-- fusion of (cata _ . apo _)
cataApo :: Functor f => Algebra f b -> RCoalgebra f a -> a -> b
cataApo alg coalg = go
  where
    go x = alg (either (cata alg) go <$> coalg x)
concat3' :: Ord a => a -> Tree a -> Tree a -> Tree a
concat3' v l r = cataApo rebalanceAlg (concatAlg v) (l, r)
Full gist: https://gist.github.com/Lysxia/281010fbe40eac9be0b135d4733c3d5a

Related

Traversing with a Biapplicative

I was thinking about unzipping operations and realized that one way to express them is by traversing in a Biapplicative functor.
import Data.Biapplicative
class Traversable2 t where
  traverse2 :: Biapplicative p
            => (a -> p b c) -> t a -> p (t b) (t c)

-- Note: sequence2 :: [(a,b)] -> ([a], [b])
sequence2 :: (Traversable2 t, Biapplicative p)
          => t (p b c) -> p (t b) (t c)
sequence2 = traverse2 id

instance Traversable2 [] where
  traverse2 _ [] = bipure [] []
  traverse2 f (x : xs) = bimap (:) (:) (f x) <<*>> traverse2 f xs
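For instance, with the pair bifunctor (,), which Data.Biapplicative makes Biapplicative, sequence2 is exactly unzip (a hedged illustration):
unzipped :: ([Int], String)
unzipped = sequence2 [(1, 'a'), (2, 'b'), (3, 'c')] -- ([1,2,3], "abc")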
It smells to me as though every instance of Traversable can be transformed mechanically into an instance of Traversable2. But I haven't yet found a way to actually implement traverse2 using traverse, short of converting to and from lists or perhaps playing extremely dirty tricks with unsafeCoerce. Is there a nice way to do this?
Further evidence that anything Traversable is Traversable2:
class (Functor t, Foldable t) => Traversable2 t where
  traverse2 :: Biapplicative p
            => (a -> p b c) -> t a -> p (t b) (t c)
  default traverse2 ::
    (Biapplicative p, Generic1 t, GTraversable2 (Rep1 t))
    => (a -> p b c) -> t a -> p (t b) (t c)
  traverse2 f xs = bimap to1 to1 $ gtraverse2 f (from1 xs)
class GTraversable2 r where
  gtraverse2 :: Biapplicative p
             => (a -> p b c) -> r a -> p (r b) (r c)
instance GTraversable2 V1 where
  gtraverse2 _ x = bipure (case x of) (case x of)
instance GTraversable2 U1 where
  gtraverse2 _ _ = bipure U1 U1
instance GTraversable2 t => GTraversable2 (M1 i c t) where
  gtraverse2 f (M1 t) = bimap M1 M1 $ gtraverse2 f t
instance (GTraversable2 t, GTraversable2 u) => GTraversable2 (t :*: u) where
  gtraverse2 f (t :*: u) = bimap (:*:) (:*:) (gtraverse2 f t) <<*>> gtraverse2 f u
instance (GTraversable2 t, GTraversable2 u) => GTraversable2 (t :+: u) where
  gtraverse2 f (L1 t) = bimap L1 L1 (gtraverse2 f t)
  gtraverse2 f (R1 t) = bimap R1 R1 (gtraverse2 f t)
instance GTraversable2 (K1 i c) where
  gtraverse2 f (K1 x) = bipure (K1 x) (K1 x)
instance (Traversable2 f, GTraversable2 g) => GTraversable2 (f :.: g) where
  gtraverse2 f (Comp1 x) = bimap Comp1 Comp1 $ traverse2 (gtraverse2 f) x
instance Traversable2 t => GTraversable2 (Rec1 t) where
  gtraverse2 f (Rec1 xs) = bimap Rec1 Rec1 $ traverse2 f xs
instance GTraversable2 Par1 where
  gtraverse2 f (Par1 p) = bimap Par1 Par1 (f p)
I think I might have something that fits your bill. (Edit: It doesn't, see comments.) You can define newtypes over p () c and p b () and make them Functor instances.
Implementation
Here's your class again with default definitions. I went the route of implementing sequence2 in terms of sequenceA because it seemed simpler.
class Functor t => Traversable2 t where
  {-# MINIMAL traverse2 | sequence2 #-}
  traverse2 :: Biapplicative p => (a -> p b c) -> t a -> p (t b) (t c)
  traverse2 f = sequence2 . fmap f
  sequence2 :: Biapplicative p => t (p b c) -> p (t b) (t c)
  sequence2 = traverse2 id
Now, the "right part" of the Biapplicative is
newtype R p c = R { runR :: p () c }

instance Bifunctor p => Functor (R p) where
  fmap f (R x) = R $ bimap id f x

instance Biapplicative p => Applicative (R p) where
  pure x = R (bipure () x)
  R f <*> R x =
    let f' = biliftA2 const (flip const) (bipure id ()) f
    in R $ f' <<*>> x

mkR :: Biapplicative p => p b c -> R p c
mkR = R . biliftA2 const (flip const) (bipure () ())
sequenceR :: (Traversable t, Biapplicative p) => t (p b c) -> p () (t c)
sequenceR = runR . sequenceA . fmap mkR
with the "left part" much the same. The full code is in this gist.
Now we can make p (t b) () and p () (t c) and reassemble them into p (t b) (t c).
instance (Functor t, Traversable t) => Traversable2 t where
  sequence2 x = biliftA2 const (flip const) (sequenceL x) (sequenceR x)
I needed to turn on FlexibleInstances and UndecidableInstances for that instance declaration. Also, somehow GHC wanted a Functor constraint.
Testing
I verified with your instance for [] that it gives the same results:
import Data.Char (ord)

main :: IO ()
main = do
  let xs = [(x, ord x - 97) | x <- ['a'..'g']]
  print xs
  print (sequence2 xs)
  print (sequence2' xs)

traverse2' :: Biapplicative p => (a -> p b c) -> [a] -> p [b] [c]
traverse2' _ [] = bipure [] []
traverse2' f (x : xs) = bimap (:) (:) (f x) <<*>> traverse2' f xs

sequence2' :: Biapplicative p => [p b c] -> p [b] [c]
sequence2' = traverse2' id
outputs
[('a',0),('b',1),('c',2),('d',3),('e',4),('f',5),('g',6)]
("abcdefg",[0,1,2,3,4,5,6])
("abcdefg",[0,1,2,3,4,5,6])
This was a fun exercise!
The following seems to do the trick, exploiting “only” undefined. Possibly the traversable laws guarantee that this is ok, but I've not attempted to prove it.
{-# LANGUAGE GADTs, KindSignatures, TupleSections #-}
import Data.Biapplicative
import Data.Traversable
data Bimock :: (* -> * -> *) -> * -> * where
  Bimock :: p a b -> Bimock p (a,b)
  Bimfmap :: ((a,b) -> c) -> p a b -> Bimock p c
  Bimpure :: a -> Bimock p a
  Bimapp :: Bimock p ((a,b) -> c) -> p a b -> Bimock p c

instance Functor (Bimock p) where
  fmap f (Bimock p) = Bimfmap f p
  fmap f (Bimfmap g p) = Bimfmap (f . g) p
  fmap f (Bimpure x) = Bimpure (f x)
  fmap f (Bimapp gs xs) = Bimapp (fmap (f .) gs) xs

instance Biapplicative p => Applicative (Bimock p) where
  pure = Bimpure
  Bimpure f<*>xs = fmap f xs
  fs<*>Bimpure x = fmap ($x) fs
  fs<*>Bimock p = Bimapp fs p
  Bimfmap g h<*>Bimfmap i xs = Bimfmap (\(~(a₁,a₂),~(b₁,b₂)) -> g (a₁,b₁) $ i (a₂, b₂))
                                 $ bimap (,) (,) h<<*>>xs
  Bimapp g h<*>xs = fmap uncurry g <*> ((,)<$>Bimock h<*>xs)
runBimock :: Biapplicative p => Bimock p (a,b) -> p a b
runBimock (Bimock p) = p
runBimock (Bimfmap f p) = bimap (fst . f . (,undefined)) (snd . f . (undefined,)) p
runBimock (Bimpure (a,b)) = bipure a b
runBimock (Bimapp (Bimpure f) xs) = runBimock . fmap f $ Bimock xs
runBimock (Bimapp (Bimfmap h g) xs)
  = runBimock . fmap (\(~(a₂,a₁),~(b₂,b₁)) -> h (a₂,b₂) (a₁,b₁))
      . Bimock $ bimap (,) (,) g<<*>>xs
runBimock (Bimapp (Bimapp h g) xs)
  = runBimock . (fmap (\θ (~(a₂,a₁),~(b₂,b₁)) -> θ (a₂,b₂) (a₁,b₁)) h<*>)
      . Bimock $ bimap (,) (,) g<<*>>xs
traverse2 :: (Biapplicative p, Traversable t) => (a -> p b c) -> t a -> p (t b) (t c)
traverse2 f s = runBimock . fmap (\bcs->(fmap fst bcs, fmap snd bcs)) $ traverse (Bimock . f) s
sequence2 :: (Traversable t, Biapplicative p)
=> t (p b c) -> p (t b) (t c)
sequence2 = traverse2 id
And even if this is safe, I wouldn't be surprised if it gives horrible performance, what with the irrefutable patterns and quadratic (or even exponential?) tuple-tree buildup.
A few observations short of a complete, original answer.
If you have a Biapplicative bifunctor, what you can do with it is apply it to something and separate it into a pair of bifunctors isomorphic to its two components.
data Helper w a b = Helper {
    left  :: w a (),
    right :: w () b
  }

runHelper :: forall p a b. Biapplicative p => Helper p a b -> p a b
runHelper x = biliftA2 const (flip const) (left x) (right x)

makeHelper :: (Biapplicative p)
           => p a b -> Helper p a b
makeHelper w = Helper (bimap id (const ()) w)
                      (bimap (const ()) id w)

type Separated w a b = (w a (), w () b)
It would be possible to combine the approaches of @nnnmmm and @leftaroundabout by applying fmap (makeHelper . f) to the structure s, eliminating the need for undefined, but then you would need to make Helper or its replacement an instance of some typeclass with the useful operations that let you solve the problem.
If you have a Traversable structure, what you can do is sequenceA Applicative functors (in which case your solution will look like traverse2 f = fromHelper . sequenceA . fmap (makeHelper . f), where your Applicative instance builds a pair of t structures) or traverse it using a Functor (in which case your solution will look like traverse2 f = fromHelper . traverse (g . makeHelper . f) where ...). Either way, you need to define a Functor instance, since Applicative inherits from Functor. You might try to build your Functor from <<*>> and bipure id id, or bimap, or you might work on both separated variables in the same pass.
Unfortunately, to make the types work for the Functor instance, you have to parameterize :: p b c to a type we would informally call :: w (b,c), where the one parameter is the Cartesian product of the two parameters of p. Haskell's type system doesn't seem to allow this without non-standard extensions, but @leftaroundabout pulls this off ably with the Bimock type, using undefined to coerce both separated functors to have the same type.
For performance, what you want to do is make no more than one traversal, which produces an object isomorphic to p (t b) (t c) that you can then convert (similar to the Naturality law). You therefore want to implement traverse2 rather than sequence2 and define sequence2 as traverse2 id, to avoid traversing twice. If you separate variables and produce something isomorphic to (p (t b) (), p () (t c)), you can then recombine them as @mmmnnn does.
In practical use, I suspect you would want to impose some additional structure on the problem. Your question kept the components b and c of the Bifunctor completely free, but in practice they will usually be either covariant or contravariant functors that can be sequenced with biliftA2 or traversed together over a Bitraversable rather than Traversable t, or perhaps even have a Semigroup, Applicative or Monad instance.
A particularly efficient optimization would be if your p is isomorphic to a Monoid whose <> operation produces a data structure isomorphic to your t. (This works for lists and binary trees; Data.ByteString.Builder is an algebraic type that has this property.) In this case, the associativity of the operation lets you transform the structure into either a strict left fold or a lazy right fold.
This was an excellent question, and although I don't have better code than @leftaroundabout for the general case, I learned a lot from working on it.
One only mildly evil way to do this is using something like Magma from lens. This seems considerably simpler than leftaroundabout's solution, although it's not beautiful either.
data Mag a b t where
  Pure :: t -> Mag a b t
  Map :: (x -> t) -> Mag a b x -> Mag a b t
  Ap :: Mag a b (t -> u) -> Mag a b t -> Mag a b u
  One :: a -> Mag a b b

instance Functor (Mag a b) where
  fmap = Map

instance Applicative (Mag a b) where
  pure = Pure
  (<*>) = Ap

traverse2 :: forall t a b c f. (Traversable t, Biapplicative f)
          => (a -> f b c) -> t a -> f (t b) (t c)
traverse2 f0 xs0 = go m m
  where
    m :: Mag a x (t x)
    m = traverse One xs0
    go :: forall x y. Mag a b x -> Mag a c y -> f x y
    go (Pure t) (Pure u) = bipure t u
    go (Map f x) (Map g y) = bimap f g (go x y)
    go (Ap fs xs) (Ap gs ys) = go fs gs <<*>> go xs ys
    go (One x) (One y) = f0 x
    go _ _ = error "Impossible"
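As a quick sanity check of the Mag-based traverse2 (a hedged example using the (,) Biapplicative; checkMag is a made-up name), pairing a list with itself produces the expected unzip:
checkMag :: ([Int], [String])
checkMag = traverse2 (\x -> (x, show x)) [1, 2, 3] -- ([1,2,3], ["1","2","3"])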

Is there a way to generalize this TrieMap code?

Below is a simple Haskell program which computes equalities on trees:
import Control.Monad
import Control.Applicative
import Data.Maybe
data Tree = Leaf | Node Tree Tree
eqTree :: Tree -> Tree -> Maybe ()
eqTree Leaf Leaf = return ()
eqTree (Node l1 r1) (Node l2 r2) = eqTree l1 l2 >> eqTree r1 r2
eqTree _ _ = empty
Suppose you have an association list of trees [(Tree, a)], and you'd like to find the entry for a given tree. (One can think of this as a simplified version of the type class instance lookup problem.) Naively, we would have to do O(n*s) work, where n is the number of trees, and s is the size of each tree.
We can do better if we use a trie map to represent our association list:
(>.>) = flip (.)
data TreeMap a
  = TreeMap {
      tm_leaf :: Maybe a,
      tm_node :: TreeMap (TreeMap a)
    }
lookupTreeMap :: Tree -> TreeMap a -> Maybe a
lookupTreeMap Leaf = tm_leaf
lookupTreeMap (Node l r) = tm_node >.> lookupTreeMap l >=> lookupTreeMap r
Our lookup now only takes O(s). This algorithm is a strict generalization of the previous one, since we can test for equality by creating a singleton TreeMap () and then seeing if we get back Just (). But for practical reasons, we'd prefer not to do this, since it involves building up a TreeMap and then immediately tearing it down.
Is there a way to generalize the two pieces of code above into a new function that can operate on both Tree and TreeMap? There seems to be some similarity in how the code is structured, but it is not obvious how to abstract the differences away.
Edit: I remembered a very helpful fact about logarithms and derivatives which I discovered whilst disgustingly hung over on a friend's sofa. Sadly, that friend (the late great Kostas Tourlas) is no longer with us, but I commemorate him by being disgustingly hung over on a different friend's sofa.
Let's remind ourselves about tries. (Lots of my mates were working on these structures in the early noughties: Ralf Hinze, Thorsten Altenkirch and Peter Hancock spring instantly to mind in that regard.) What's really going on is that we're computing the exponential of a type t, remembering that t -> x is a way of writing x ^ t.
That is, we expect to equip a type t with a functor Expo t such that Expo t x represents t -> x. We should further expect Expo t to be applicative (zippily). Edit: Hancock calls such functors "Naperian", because they have logarithms, and they're applicative in the same way as functions, with pure being the K combinator and <*> being S. It is immediate that Expo t () must be isomorphic with (), with const (pure ()) and const () doing the (not much) work.
class Applicative (Expo t) => EXPO t where
  type Expo t :: * -> *
  appl :: Expo t x -> (t -> x) -- trie lookup
  abst :: (t -> x) -> Expo t x -- trie construction
Another way of putting it is that t is the logarithm of Expo t.
(I nearly forgot: fans of calculus should check that t is isomorphic to ∂ (Expo t) (). This isomorphism might actually be rather useful. Edit: it's extremely useful, and we shall add it to EXPO later.)
We'll need some functor kit stuff. The identity functor is zippily applicative...
data I :: (* -> *) where
  I :: x -> I x
  deriving (Show, Eq, Functor, Foldable, Traversable)

instance Applicative I where
  pure x = I x
  I f <*> I s = I (f s)
...and its logarithm is the unit type
instance EXPO () where
  type Expo () = I
  appl (I x) () = x
  abst f = I (f ())
Products of zippy applicatives are zippily applicative...
data (:*:) :: (* -> *) -> (* -> *) -> (* -> *) where
  (:*:) :: f x -> g x -> (f :*: g) x
  deriving (Show, Eq, Functor, Foldable, Traversable)

instance (Applicative p, Applicative q) => Applicative (p :*: q) where
  pure x = pure x :*: pure x
  (pf :*: qf) <*> (ps :*: qs) = (pf <*> ps) :*: (qf <*> qs)
...and their logarithms are sums.
instance (EXPO s, EXPO t) => EXPO (Either s t) where
  type Expo (Either s t) = Expo s :*: Expo t
  appl (sf :*: tf) (Left s) = appl sf s
  appl (sf :*: tf) (Right t) = appl tf t
  abst f = abst (f . Left) :*: abst (f . Right)
Compositions of zippy applicatives are zippily applicative...
data (:<:) :: (* -> *) -> (* -> *) -> (* -> *) where
  C :: f (g x) -> (f :<: g) x
  deriving (Show, Eq, Functor, Foldable, Traversable)

instance (Applicative p, Applicative q) => Applicative (p :<: q) where
  pure x = C (pure (pure x))
  C pqf <*> C pqs = C (pure (<*>) <*> pqf <*> pqs)
and their logarithms are products.
instance (EXPO s, EXPO t) => EXPO (s, t) where
  type Expo (s, t) = Expo s :<: Expo t
  appl (C stf) (s, t) = appl (appl stf s) t
  abst f = C (abst $ \ s -> abst $ \ t -> f (s, t))
If we switch on enough stuff, we may now write
newtype Tree = Tree (Either () (Tree, Tree))
  deriving (Show, Eq)
pattern Leaf = Tree (Left ())
pattern Node l r = Tree (Right (l, r))

newtype ExpoTree x = ExpoTree (Expo (Either () (Tree, Tree)) x)
  deriving (Show, Eq, Functor, Applicative)

instance EXPO Tree where
  type Expo Tree = ExpoTree
  appl (ExpoTree f) (Tree t) = appl f t
  abst f = ExpoTree (abst (f . Tree))
The TreeMap a type in the question, being
data TreeMap a
  = TreeMap {
      tm_leaf :: Maybe a,
      tm_node :: TreeMap (TreeMap a)
    }
is exactly Expo Tree (Maybe a), with lookupTreeMap being flip appl.
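Concretely (a hedged one-liner; lookupExpo is a made-up name to avoid clashing with the question's function):
lookupExpo :: Tree -> Expo Tree (Maybe a) -> Maybe a
lookupExpo = flip appl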
Now, given that Tree and Tree -> x are rather different things, it strikes me as odd to want code to work "on both". The tree equality test is a special case of the lookup only in that the tree equality test is any old function which acts on a tree. There is a coincidence, however: to test equality, we must turn each tree into its own self-recognizer. Edit: that's exactly what the log-diff iso does.
The structure which gives rise to an equality test is some notion of matching. Like this:
class Matching a b where
  type Matched a b :: *
  matched :: Matched a b -> (a, b)
  match :: a -> b -> Maybe (Matched a b)
That is, we expect Matched a b to represent somehow a pair of an a and a b which match. We should be able to extract the pair (forgetting that they match), and we should be able to take any pair and try to match them.
Unsurprisingly, we can do this for the unit type, quite successfully.
instance Matching () () where
  type Matched () () = ()
  matched () = ((), ())
  match () () = Just ()
For products, we work componentwise, with component mismatch being the only danger.
instance (Matching s s', Matching t t') => Matching (s, t) (s', t') where
  type Matched (s, t) (s', t') = (Matched s s', Matched t t')
  matched (ss', tt') = ((s, t), (s', t')) where
    (s, s') = matched ss'
    (t, t') = matched tt'
  match (s, t) (s', t') = (,) <$> match s s' <*> match t t'
Sums offer some chance of mismatch.
instance (Matching s s', Matching t t') =>
    Matching (Either s t) (Either s' t') where
  type Matched (Either s t) (Either s' t')
    = Either (Matched s s') (Matched t t')
  matched (Left ss') = (Left s, Left s') where (s, s') = matched ss'
  matched (Right tt') = (Right t, Right t') where (t, t') = matched tt'
  match (Left s) (Left s') = Left <$> match s s'
  match (Right t) (Right t') = Right <$> match t t'
  match _ _ = Nothing
Amusingly, we can obtain an equality test for trees now as easily as
instance Matching Tree Tree where
  type Matched Tree Tree = Tree
  matched t = (t, t)
  match (Tree t1) (Tree t2) = Tree <$> match t1 t2
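In particular, the question's eqTree is recovered from match (a hedged one-liner):
eqTree :: Tree -> Tree -> Maybe ()
eqTree t1 t2 = () <$ match t1 t2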
(Incidentally, the Functor subclass that captures a notion of matching, being
class HalfZippable f where -- "half zip" comes from Roland Backhouse
  halfZip :: (f a, f b) -> Maybe (f (a, b))
is sadly neglected. Morally, for each such f, we should have
Matched (f a) (f b) = f (Matched a b)
A fun exercise is to show that if (Traversable f, HalfZippable f), then the free monad on f has a first-order unification algorithm.)
I suppose we can build "singleton association lists" like this:
mapOne :: forall a. (Tree, a) -> Expo Tree (Maybe a)
mapOne (t, a) = abst f where
  f :: Tree -> Maybe a
  f u = pure a <* match t u
And we could try combining them with this gadget, exploiting the zippiness of all the Expo ts...
instance Monoid x => Monoid (ExpoTree x) where
  mempty = pure mempty
  mappend t u = mappend <$> t <*> u
...but, yet again, the utter stupidity of the Monoid instance for Maybe x continues to frustrate clean design.
We can at least manage
instance Alternative m => Alternative (ExpoTree :<: m) where
  empty = C (pure empty)
  C f <|> C g = C ((<|>) <$> f <*> g)
An amusing exercise is to fuse abst with match, and perhaps that's what the question is really driving at. Let's refactor Matching.
class EXPO b => Matching a b where
  type Matched a b :: *
  matched :: Matched a b -> (a, b)
  match' :: a -> Proxy b -> Expo b (Maybe (Matched a b))

data Proxy x = Poxy -- I'm not on GHC 8 yet, and Simon needs a hand here
For (), what's new is
instance Matching () () where
  -- skip old stuff
  match' () (Poxy :: Proxy ()) = I (Just ())
For sums, we need to tag successful matches, and fill in the unsuccessful parts with a magnificently Glaswegian pure Nothing.
instance (Matching s s', Matching t t') =>
    Matching (Either s t) (Either s' t') where
  -- skip old stuff
  match' (Left s) (Poxy :: Proxy (Either s' t')) =
    ((Left <$>) <$> match' s (Poxy :: Proxy s')) :*: pure Nothing
  match' (Right t) (Poxy :: Proxy (Either s' t')) =
    pure Nothing :*: ((Right <$>) <$> match' t (Poxy :: Proxy t'))
For pairs, we need to build matching in sequence, dropping out early if the first component fails.
instance (Matching s s', Matching t t') => Matching (s, t) (s', t') where
  -- skip old stuff
  match' (s, t) (Poxy :: Proxy (s', t'))
      = C (more <$> match' s (Poxy :: Proxy s')) where
    more Nothing = pure Nothing
    more (Just s) = ((,) s <$>) <$> match' t (Poxy :: Proxy t')
So we can see that there is a connection between a constructor and the trie for its matcher.
Homework: fuse abst with match', effectively tabulating the entire process.
Edit: writing match', we parked each sub-matcher in the position of the trie corresponding to the sub-structure. And when you think of things in particular positions, you should think of zippers and differential calculus. Let me remind you.
We'll need functorial constants and coproducts to manage choice of "where the hole is".
data K :: * -> (* -> *) where
  K :: a -> K a x
  deriving (Show, Eq, Functor, Foldable, Traversable)

data (:+:) :: (* -> *) -> (* -> *) -> (* -> *) where
  Inl :: f x -> (f :+: g) x
  Inr :: g x -> (f :+: g) x
  deriving (Show, Eq, Functor, Foldable, Traversable)
And now we may define
class (Functor f, Functor (D f)) => Differentiable f where
  type D f :: (* -> *)
  plug :: (D f :*: I) x -> f x
  -- there should be other methods, but plug will do for now
The usual laws of calculus apply, with composition giving a spatial interpretation to the chain rule.
instance Differentiable (K a) where
  type D (K a) = K Void
  plug (K bad :*: I x) = K (absurd bad)

instance Differentiable I where
  type D I = K ()
  plug (K () :*: I x) = I x

instance (Differentiable f, Differentiable g) => Differentiable (f :+: g) where
  type D (f :+: g) = D f :+: D g
  plug (Inl f' :*: I x) = Inl (plug (f' :*: I x))
  plug (Inr g' :*: I x) = Inr (plug (g' :*: I x))

instance (Differentiable f, Differentiable g) => Differentiable (f :*: g) where
  type D (f :*: g) = (D f :*: g) :+: (f :*: D g)
  plug (Inl (f' :*: g) :*: I x) = plug (f' :*: I x) :*: g
  plug (Inr (f :*: g') :*: I x) = f :*: plug (g' :*: I x)

instance (Differentiable f, Differentiable g) => Differentiable (f :<: g) where
  type D (f :<: g) = (D f :<: g) :*: D g
  plug ((C f'g :*: g') :*: I x) = C (plug (f'g :*: I (plug (g' :*: I x))))
It will not harm us to insist that Expo t is differentiable, so let us extend the EXPO class. What's a "trie with a hole"? It's a trie which is missing the output entry for exactly one of the possible inputs. And that's the key.
class (Differentiable (Expo t), Applicative (Expo t)) => EXPO t where
  type Expo t :: * -> *
  appl :: Expo t x -> t -> x
  abst :: (t -> x) -> Expo t x
  hole :: t -> D (Expo t) ()
  eloh :: D (Expo t) () -> t
Now, hole and eloh will witness the isomorphism.
instance EXPO () where
  type Expo () = I
  -- skip old stuff
  hole () = K ()
  eloh (K ()) = ()
The unit case wasn't very exciting, but the sum case begins to show structure:
instance (EXPO s, EXPO t) => EXPO (Either s t) where
  type Expo (Either s t) = Expo s :*: Expo t
  hole (Left s) = Inl (hole s :*: pure ())
  hole (Right t) = Inr (pure () :*: hole t)
  eloh (Inl (f' :*: _)) = Left (eloh f')
  eloh (Inr (_ :*: g')) = Right (eloh g')
See? A Left is mapped to a trie with a hole on the left; a Right is mapped to a trie with a hole on the right.
Now for products.
instance (EXPO s, EXPO t) => EXPO (s, t) where
  type Expo (s, t) = Expo s :<: Expo t
  hole (s, t) = C (const (pure ()) <$> hole s) :*: hole t
  eloh (C f' :*: g') = (eloh (const () <$> f'), eloh g')
A trie for a pair is a right trie stuffed inside a left trie, so the hole for a particular pair is found by making a hole for the right element in the particular subtrie for the left element.
For trees, we make another wrapper.
newtype DExpoTree x = DExpoTree (D (Expo (Either () (Tree, Tree))) x)
  deriving (Show, Eq, Functor)
So, how do we turn a tree into its trie recognizer? First, we grab its "everyone but me" trie, and we fill in all those outputs with False, then we plug in True for the missing entry.
matchMe :: EXPO t => t -> Expo t Bool
matchMe t = plug ((const False <$> hole t) :*: I True)
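So appl (matchMe t) is the characteristic function of t (a hedged one-liner; isIt is a made-up name):
isIt :: EXPO t => t -> t -> Bool
isIt t = appl (matchMe t)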
Homework hint: D f :*: I is a comonad.
Absent friends!
This is a naïve solution. The class BinaryTree describes how both Trees and TreeMaps are binary trees.
{-# LANGUAGE RankNTypes, MultiParamTypeClasses, FlexibleInstances #-}
class BinaryTree t a where
  leaf :: MonadPlus m => t a -> m a
  node :: MonadPlus m => (forall r. BinaryTree t r => t r -> m r) ->
          (forall r. BinaryTree t r => t r -> m r) ->
          t a -> m a
The awkward BinaryTree t r constraints and the multi-parameter type class are only necessary because Trees don't hold an a at their leaves to return. If your real Tree is richer this wrinkle will probably disappear.
lookupTreeMap can be written in terms of BinaryTree instead of in terms of Tree or TreeMap
lookupTreeMap' :: BinaryTree t r => Tree -> t r -> Maybe r
lookupTreeMap' Leaf = leaf
lookupTreeMap' (Node l r) = node (lookupTreeMap' l) (lookupTreeMap' r)
TreeMap has a straightforward BinaryTree instance.
instance BinaryTree TreeMap a where
  leaf = maybe empty return . tm_leaf
  node kl kr = tm_node >.> kl >=> kr
Tree can't have a BinaryTree instance because it has the wrong kind. That's easily fixed with a newtype:
newtype Tree2 a = Tree2 {unTree2 :: Tree}
tree2 :: Tree -> Tree2 ()
tree2 = Tree2
Tree2 can be equipped with a BinaryTree instance.
instance BinaryTree Tree2 () where
  leaf (Tree2 Leaf) = return ()
  leaf _ = empty
  node kl kr (Tree2 (Node l r)) = kl (tree2 l) >> kr (tree2 r)
  node _ _ _ = empty
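With those instances, the question's equality test is one specialization of lookupTreeMap' (a hedged sketch; eqTree' is a made-up name):
eqTree' :: Tree -> Tree -> Maybe ()
eqTree' t u = lookupTreeMap' t (tree2 u)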
I don't think the above is a particularly elegant solution, or that it will necessarily simplify anything, unless the implementation of lookupTreeMap is non-trivial. As an incremental improvement, I'd recommend refactoring Tree into the base functor
data TreeF a = Leaf | Node a a
data Tree = Tree (TreeF Tree)
We can split the problem into matching the base functor against itself,
-- This looks like a generalized version of Applicative that can fail
untreeF :: MonadPlus m => TreeF (a -> m b) -> TreeF a -> m (TreeF b)
untreeF Leaf Leaf = return Leaf
untreeF (Node kl kr) (Node l r) = Node <$> kl l <*> kr r
untreeF _ _ = empty
matching the base functor against Trees,
untree :: MonadPlus m => TreeF (Tree -> m ()) -> Tree -> m ()
untree tf (Tree tf2) = untreeF tf tf2 >> return ()
and matching the base functor against TreeMap.
-- A reader for things that read from a TreeMap to avoid impredicative types.
data TMR m = TMR {runtmr :: forall r. TreeMap r -> m r}
-- This work is unavoidable. Something has to say how a TreeMap is related to Trees
untreemap :: MonadPlus m => TreeF (TMR m) -> TMR m
untreemap Leaf = TMR $ maybe empty return . tm_leaf
untreemap (Node kl kr) = TMR $ tm_node >.> runtmr kl >=> runtmr kr
As in the first example, we define the traversal of the tree only once.
-- This looks suspiciously like a traversal / transform
lookupTreeMap' :: (TreeF a -> a) -> Tree -> a
lookupTreeMap' un = go
  where
    go (Tree Leaf) = un Leaf
    go (Tree (Node l r)) = un $ Node (go l) (go r)
    -- If the traversal is trivial these can be replaced by
    -- go (Tree tf) = un $ go <$> tf
The operations specialized for Tree and TreeMap can be obtained from the single definition of the traversal.
eqTree :: Tree -> Tree -> Maybe ()
eqTree = lookupTreeMap' untree
lookupTreeMap :: MonadPlus m => Tree -> TreeMap a -> m a
lookupTreeMap = runtmr . lookupTreeMap' untreemap

How to use Functor instances with Fix types

Let's say I want to have a very generic ListF data type:
{-# LANGUAGE GADTs, DataKinds #-}
data ListF :: * -> * -> * where
  Nil :: ListF a b
  Cons :: a -> b -> ListF a b
Now I can use this data type with Data.Fix to build an f-algebra
import qualified Data.Fix as Fx
instance Functor (ListF a :: * -> *) where
  fmap f (Cons x y) = Cons x (f y)
  fmap _ Nil = Nil

sumOfNums = Fx.cata f (Fx.Fix $ Cons 2 (Fx.Fix $ Cons 3 (Fx.Fix $ Cons 5 (Fx.Fix Nil))))
  where
    f (Cons x y) = x + y
    f Nil = 0
But how can I use this very generic data type ListF to create what I consider the default Functor instance for recursive lists (mapping over each value in the list)?
I guess I could use a Bifunctor (mapping over the first value, traversing the second), but I don't know how that could ever work with Data.Fix.Fix?
Quite right to construct a recursive functor by taking the fixpoint of a bifunctor, because 1 + 1 = 2. The list node structure is given as a container with 2 sorts of substructure: "elements" and "sublists".
It can be troubling that we need a whole other notion of Functor (which captures a rather specific variety of functor, despite its rather general name), to construct a Functor as a fixpoint. We can, however (as a bit of a stunt), shift to a slightly more general notion of functor which is closed under fixpoints.
type p -:> q = forall i. p i -> q i
class FunctorIx (f :: (i -> *) -> (o -> *)) where
  mapIx :: (p -:> q) -> f p -:> f q
These are the functors on indexed sets, so the names are not just gratuitous homages to Goscinny and Uderzo. You can think of o as "sorts of structure" and i as "sorts of substructure". Here's an example, based on the fact that 1 + 1 = 2.
data ListF :: (Either () () -> *) -> (() -> *) where
  Nil :: ListF p '()
  Cons :: p (Left '()) -> p (Right '()) -> ListF p '()

instance FunctorIx ListF where
  mapIx f Nil = Nil
  mapIx f (Cons a b) = Cons (f a) (f b)
To exploit the choice of substructure sort, we'll need a kind of type-level case analysis. We can't get away with a type function, as
we need it to be partially applied, and that's not allowed;
we need a bit at run time to tell us which sort is present.
data Case :: (i -> *) -> (j -> *) -> (Either i j -> *) where
  CaseL :: p i -> Case p q (Left i)
  CaseR :: q j -> Case p q (Right j)

caseMap :: (p -:> p') -> (q -:> q') -> Case p q -:> Case p' q'
caseMap f g (CaseL p) = CaseL (f p)
caseMap f g (CaseR q) = CaseR (g q)
And now we can take the fixpoint:
data Mu :: ((Either i j -> *) -> (j -> *)) ->
           ((i -> *) -> (j -> *)) where
  In :: f (Case p (Mu f p)) j -> Mu f p j
In each substructure position, we do a case split to see whether we should have a p-element or a Mu f p substructure. And we get its functoriality.
instance FunctorIx f => FunctorIx (Mu f) where
  mapIx f (In fpr) = In (mapIx (caseMap f (mapIx f)) fpr)
To build lists from these things, we need to juggle between * and () -> *.
newtype K a i = K {unK :: a}
type List a = Mu ListF (K a) '()
pattern NilP :: List a
pattern NilP = In Nil
pattern ConsP :: a -> List a -> List a
pattern ConsP a as = In (Cons (CaseL (K a)) (CaseR as))
Now, for lists, we get
map' :: (a -> b) -> List a -> List b
map' f = mapIx (K . f . unK)
I guess I could use a Bifunctor (mapping over the first value, traversing the second), but I don't know how that could ever work with Data.Fix.Fix?
You hit the nail on the head.
The bifunctors package contains a "Fix-for-bifunctors" type which looks like this:
newtype Fix f a = In { out :: f (Fix f a) a }
Fix f is a Functor whenever f is a Bifunctor. fmap recursively fmaps f's first parameter and maps the second.
instance Bifunctor f => Functor (Fix f) where
  fmap f = In . bimap (fmap f) f . out
So your List example would look like this:
data ListF r a = Nil | Cons r a
type List = Fix ListF
map :: (a -> b) -> List a -> List b
map = fmap
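A hedged usage sketch, assuming Data.Bifunctor and Data.Bifunctor.Fix from the bifunctors package (GHC does not derive Bifunctor, so the instance is written by hand; example and bumped are illustrative names):
instance Bifunctor ListF where
  bimap _ _ Nil        = Nil
  bimap f g (Cons r a) = Cons (f r) (g a)

example :: List Int
example = In (Cons (In (Cons (In Nil) 2)) 1) -- the list [1, 2]

bumped :: List Int
bumped = fmap (+ 1) example                  -- the list [2, 3]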

What is the name of this functor that uses RankNTypes?

While playing around with the objective package, I noticed that the following type has an interesting property.
> {-# LANGUAGE RankNTypes #-}
> data N f r = N { unN :: forall x. f x -> (x, r) }
It is a Functor.
> instance Functor (N f) where
>   fmap f (N nat) = N $ fmap (fmap f) nat
>   -- or, = N $ \fx -> let { (x,a) = nat fx } in (x, f a)
After a few hours of Google/Hoogle searching, I gave up on finding any existing module that includes this type.
What is this type? If it is well known, what is its name? Is it useful, or is it ignored because it is useless?
This is not a 100% original creation of mine: N was derived from Object, found in the objective package.
> data Object f g = Object {
>     runObject :: forall x. f x -> g (x, Object f g)
>   }
N f is a Functor which yields Object f Identity when Fix is applied to it.
The following are some facts about this type and why I think it is interesting.
N converts Reader to Writer, and vice versa.
(Here I use the (=) symbol for isomorphism between types.)
N ((->) e) r
= forall x. (e -> x) -> (x, r)
= (e, r)
N ((,) d) r
= forall x. (d, x) -> (x, r)
= d -> r
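These isomorphisms can be spelled out as ordinary functions (a hedged sketch with made-up names):
> toWriter :: N ((->) e) r -> (e, r)
> toWriter (N nat) = nat id
> fromWriter :: (e, r) -> N ((->) e) r
> fromWriter (e, r) = N $ \f -> (f e, r)
> toReader :: N ((,) d) r -> (d -> r)
> toReader (N nat) d = snd (nat (d, ()))
> fromReader :: (d -> r) -> N ((,) d) r
> fromReader g = N $ \(d, x) -> (x, g d)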
N converts the Store comonad to the State monad, but the converse is not true.
> data Store s a = Store s (s -> a)
> type State s a = s -> (s, a)
N (Store s) r
= forall x. (s, (s -> x)) -> (x, r)
= forall x. s -> (s -> x) -> (x, r)
= s -> (s, r)
= State s r
N (State s) r
= forall x. (s -> (s, x)) -> (x, r)
= forall x. (s -> s, s -> x) -> (x, r)
= forall x. (s -> s) -> (s -> x) -> (x, r)
= (s -> s) -> (s, r) -- ???
N can't take Maybe.
N Maybe r
= forall x. Maybe x -> (x, r)
= forall x. (() -> (x, r), x -> (x, r))
= Void -- because (() -> (x, r)) can't be implemented
The following function may be fun. I couldn't write its inverse.
> data Cofree f a = Cofree a (f (Cofree f a))
> data Free f a = Pure a | Wrap (f (Free f a))
> unfree :: Free (N f) r -> N (Cofree f) r
> unfree (Pure r) = N $ \(Cofree a _) -> (a, r)
> unfree (Wrap n_f) = N $
>   \(Cofree _ f) -> let (cofree', free') = unN n_f f
>                    in unN (unfree free') cofree'
The entire post is literate Haskell (.lhs).
I call it a "handler" functor. Object used to be defined using the handler functor before I released objective.
Yeah, this functor is interesting -- Cofree (Handler f) has a public getter and Free (Handler f) is a mortal object. Maybe I should have shipped the handler functor...
Although the question has already been answered, I found another answer to it myself.
The type N is a value-level representation of the type class Pairing, described in the following articles:
Free for DSLs, cofree for interpreters
Cofree Comonads and the Expression Problem
(Pairing is called Dual there.)
Pairing and N are the same thing
The definition of Pairing is this.
> class Pairing f g where
>   pair :: (a -> b -> c) -> f a -> g b -> c
f and N f form a Pairing.
> instance Pairing f (N f) where
>   pair k fa nb = uncurry k $ unN nb fa
N can be represented in terms of Pairing.
> data Counterpart f r = forall g. Pairing f g => Counterpart (g r)
>
> iso1 :: N f r -> Counterpart f r
> iso1 = Counterpart
>
> iso2 :: Counterpart f r -> N f r
> iso2 (Counterpart gr) = N $ \fx -> pair (,) fx gr
There is a Free-vs-Cofree instance that corresponds to my unfree.
Other interesting instances are also defined in the articles.
> instance Pairing f g => Pairing (Free f) (Cofree g) where
>   pair = undefined -- see link above
Extending Pairing to PairingM to Object
The former article goes on to extend Pairing to do computation inside a Monad m.
> class PairingM f g m | f -> g, g -> f where
>   pairM :: (a -> b -> m r) -> f a -> g b -> m r
If we rewrite PairingM to a form similar to N, we get the Object again.
> -- Monad m => HandlerM' f m r ~ HandlerM f m r
> data HandlerM' f m r = forall g. PairingM f g m => HandlerM' (g r)
> data HandlerM f m r = HandlerM { runHandlerM :: forall x. f x -> m (x, r) }
>
> -- Fix (HandlerM f m) ~ Object f m
> -- Free (HandlerM f m) ~ (mortal Object from f to m)

Chaining values with catamorphisms

Suppose I have definitions as follows (where cata is the catamorphism):
type Algebra f a = f a -> a
newtype Fix f = Fx (f (Fix f))
unFix :: Fix f -> f (Fix f)
unFix (Fx x) = x
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix
I was wondering if there would be some way to modify the definition of cata so that I could chain some object such as an Int through it, so that I could generate unique handles for things within the alg function, e.g. "a0", "a1", "a2", etc.
Edit: To make this more clear, I'd like to be able to have some function cata' such that when I have something similar to the following definitions
data IntF a
  = Const Int
  | Add a a

instance Functor IntF where
  fmap eval (Const i) = Const i
  fmap eval (x `Add` y) = eval x `Add` eval y
alg :: Int -> Algebra IntF String
alg n (Const i) = "a" ++ show n
alg n (s1 `Add` s2) = s1 ++ " && " ++ s2
eval = cata' alg
addExpr = Fx $ (Fx $ Const 5) `Add` (Fx $ Const 4)
run = eval addExpr
then run evaluates to "a0 && a1" or something similar, i.e. the two constants don't get labeled the same thing.
Just sequence them as monads.
newtype Ctr a = Ctr { runCtr :: Int -> (a, Int) } -- is State Int
instance Functor Ctr
instance Applicative Ctr
instance Monad Ctr
type MAlgebra m f a = f (m a) -> m a
fresh :: Ctr Int
fresh = Ctr (\i -> (i, i+1))
data IntF a
  = Val
  | Add a a
malg :: IntF (Ctr String) -> Ctr String
malg Val = (\x -> "a" ++ show x) <$> fresh
malg (Add x y) = (\a b -> a ++ " && " ++ b) <$> x <*> y
go = cata malg
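The instance stubs above are the usual State Int boilerplate. Here is a minimal hedged sketch that fills them in, adds the Functor instance this IntF needs for cata, and runs an expression built with the question's Fx constructor (the names are illustrative):
instance Functor Ctr where
  fmap f (Ctr g) = Ctr $ \i -> let (a, i') = g i in (f a, i')

instance Applicative Ctr where
  pure a = Ctr $ \i -> (a, i)
  Ctr gf <*> Ctr ga = Ctr $ \i ->
    let (f, i')  = gf i
        (a, i'') = ga i'
    in (f a, i'')

instance Monad Ctr where
  Ctr ga >>= k = Ctr $ \i -> let (a, i') = ga i in runCtr (k a) i'

instance Functor IntF where
  fmap _ Val       = Val
  fmap f (Add x y) = Add (f x) (f y)

example :: String
example = fst (runCtr (go (Fx (Fx Val `Add` Fx Val))) 0) -- "a0 && a1"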
As I understand, you want something like
cata' :: Functor f => (Int -> f a -> a) -> Fix f -> a
so that you can operate both on f a and its index.
If that's true, here's a possible solution.
Associated Int
First we define a new type which will represent Int-labelled functor:
{-# LANGUAGE DeriveFunctor #-}
data IntLabel f a = IntLabel Int (f a) deriving (Functor)
-- This acts pretty much like `zip`.
labelFix :: Functor f => [Int] -> Fix f -> Fix (IntLabel f)
labelFix (x:xs) (Fx f) = Fx . IntLabel x $ fmap (labelFix xs) f
Now we can define cata' using cata and labelFix:
cata' :: Functor f => (Int -> f a -> a) -> Fix f -> a
cata' alg = cata alg' . labelFix [1..]
  where
    alg' (IntLabel n f) = alg n f
NOTE: unique Ints are assigned to each layer, not each functor. E.g. for Fix [] each sublist of the outermost list will be labelled with 2.
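With the question's addExpr this caveat shows up directly (a hedged check; the root Add layer is labelled 1 and both Const children are labelled 2, so the two constants still collide):
labelled :: String
labelled = cata' alg addExpr -- "a2 && a2"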
Threading effects
A different approach to the problem would be to use cata to produce a monadic value:
cata :: Functor f => (f (m a) -> m a) -> Fix f -> m a
This is just a specialized version of cata. With it we can define (almost) cata' as
cata'' :: Traversable f => (Int -> f a -> a) -> Fix f -> a
cata'' alg = flip evalState [1..] . cata alg'
  where
    alg' f = alg <$> newLabel <*> sequenceA f
newLabel :: State [a] a
newLabel = state (\(x:xs) -> (x, xs))
Note that a Traversable instance is now needed in order to turn f (m a) into m (f a).
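Assuming such an instance for the question's IntF (e.g. derived via DeriveFoldable and DeriveTraversable), the constants now receive distinct labels (a hedged check; label 1 goes to the root Add node):
labelled' :: String
labelled' = cata'' alg addExpr -- "a2 && a3"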
However, you might want to use just a bit more specialized cata:
cata :: (Functor f, MonadReader Int m) => (f (m a) -> m a) -> Fix f -> m a
