Lifting a (Lens) Traversal to Fix

I've got the following code. As you can see the last function is undefined.
{-# LANGUAGE TemplateHaskell, DeriveFunctor, DeriveTraversable #-}
module Example where
import Control.Lens
import Data.Functor.Foldable
data PathComponent d a = Directions d | Alt [a] deriving (Show, Functor, Foldable, Traversable)
makePrisms ''PathComponent
newtype Path d a = Path [PathComponent d a] deriving (Show, Functor, Foldable, Traversable)
directions :: Traversal (Path a p) (Path b p) a b
directions a2fb (Path l) = Path <$> traverse f l where
  f (Directions d) = Directions <$> a2fb d
  f (Alt p) = (pure . Alt) p
directions' :: Traversal (Fix (Path a)) (Fix (Path b)) a b
directions' = undefined
What I ultimately want to do is map every a to a b recursively in the structure. I was hoping I could do this by lifting directions, but I seem to be held back by a) the fact that the function uses the same p in the s and t positions, and b) the fact that _Wrapping is an Iso', not an Iso. Is there an elegant way to fix this?

In directions we need to traverse the p with a2fb too. Since p is a parameter, we can take its traversal as a parameter. In addition, the f you've defined is really a traversal of PathComponent, which we can pull out as well.
First, the traversal of PathComponent a p, which is parameterized by a traversal of p (and generalized so the source and target types can vary):
data PathComponent d a = Directions d | Alt [a] deriving (Show, Functor, Foldable, Traversable)
{- Morally
traversePC ::
Traversal pa pb a b ->
Traversal (PathComponent a pa) (PathComponent b pb) a b
But the following type is both simpler (rank 1) and more general.
-}
traversePC ::
  Applicative m =>
  LensLike m pa pb a b ->
  LensLike m (PathComponent a pa) (PathComponent b pb) a b
traversePC _tp f (Directions d) = Directions <$> f d
traversePC tp f (Alt pas) = Alt <$> (traverse . tp) f pas
In the Directions case, we transform the a to a b directly.
In the Alt case, we have a list of pa, so we compose a traversal of that list (traverse) with the parameter traversal (tp).
The traversal of Path passes tp to traversePC.
newtype Path d a = Path [PathComponent d a] deriving (Show, Functor, Foldable, Traversable)
{- Same idea about the types.
directions :: Traversal pa pb a b -> Traversal (Path a pa) (Path b pb) a b
-}
directions ::
  Applicative m =>
  LensLike m pa pb a b ->
  LensLike m (Path a pa) (Path b pb) a b
directions tp f (Path l) = Path <$> (traverse . traversePC tp) f l
And finally, to traverse Fix (Path a): unwrapping the Fix gives us h :: Path a (Fix (Path a)), and we pass the top-level traversal for Fix (Path a) down recursively.
directions' :: Traversal (Fix (Path a)) (Fix (Path b)) a b
directions' f (Fix h) = Fix <$> directions directions' f h
In fact, there is a general pattern here for any Fix. If you have a functor f (here Path a), and there is a traversal of f x parameterized by a traversal of x, then you can tie a knot to get a traversal traverseFix' of Fix f, applying the parameterized traversal to traverseFix' itself.
{-
traverseFix ::
(forall x y. Traversal x y a b -> Traversal (f x) (g y) a b) ->
Traversal (Fix f) (Fix g) a b
-}
traverseFix ::
  Functor m =>
  (forall x y. LensLike m x y a b -> LensLike m (f x) (g y) a b) ->
  LensLike m (Fix f) (Fix g) a b
traverseFix traverseF = traverseFix' where
  traverseFix' f (Fix h) = Fix <$> traverseF traverseFix' f h
So we can redefine directions' as follows:
directions'' :: Traversal (Fix (Path a)) (Fix (Path b)) a b
directions'' = traverseFix directions
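As a quick sanity check (a hedged sketch: smallPath and bump are names invented for this example, not part of the answer or gist), we can build a tiny Fix (Path Int) and rewrite every direction with over, which exercises both the Directions case and the nested Alt case:
-- hypothetical example value
smallPath :: Fix (Path Int)
smallPath = Fix (Path [Directions 1, Alt [Fix (Path [Directions 2])]])
-- rewrites every direction, however deeply nested: 1 becomes 2, 2 becomes 3
bump :: Fix (Path Int)
bump = over directions'' (+ 1) smallPath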
Full gist

Related

How to define an instance of Data.Foldable.Constrained?

I've successfully defined Category, Functor, Semigroup, Monoid constrained. Now I'm stuck with Data.Foldable.Constrained. More precisely, I seem to have correctly defined the unconstrained functions fldl and fldMp, but I can't get them to be accepted as Foldable.Constrained instances.
My definition attempt is inserted as a comment.
{-# LANGUAGE OverloadedLists, GADTs, TypeFamilies, ConstraintKinds,
FlexibleInstances, MultiParamTypeClasses, StandaloneDeriving, TypeApplications #-}
import Prelude ()
import Control.Category.Constrained.Prelude
import qualified Control.Category.Hask as Hask
-- import Data.Constraint.Trivial
import Data.Foldable.Constrained
import Data.Map as M
import Data.Set as S
import qualified Data.Foldable as FL
main :: IO ()
main = print $ fmap (constrained @Ord (+1))
             $ RMS ([(1,[11,21]),(2,[31,41])])
data RelationMS a b where
  IdRMS :: RelationMS a a
  RMS :: Map a (Set b) -> RelationMS a b
deriving instance (Show a, Show b) => Show (RelationMS a b)
instance Category RelationMS where
  type Object RelationMS o = Ord o
  id = IdRMS
  RMS mp2 . RMS mp1
    | M.null mp2 || M.null mp1 = RMS M.empty
    | otherwise = RMS $ M.foldrWithKey
        (\k s acc -> M.insert k (S.foldr (\x acc2 -> case M.lookup x mp2 of
                Nothing -> acc2
                Just s2 -> S.union s2 acc2
              ) S.empty s
            ) acc
        ) M.empty mp1
(°) :: (Object k a, Object k b, Object k c, Category k) => k a b -> k b c -> k a c
r1 ° r2 = r2 . r1
instance (Ord a, Ord b) => Semigroup (RelationMS a b) where
  RMS r1 <> RMS r2 = RMS $ M.foldrWithKey (\k s acc -> M.insertWith S.union k s acc) r1 r2
instance (Ord a, Ord b) => Monoid (RelationMS a b) where
  mempty = RMS M.empty
  mappend = (<>)
instance Functor (RelationMS a) (ConstrainedCategory (->) Ord) Hask where
  fmap (ConstrainedMorphism f) = ConstrainedMorphism $
    \(RMS r) -> RMS $ M.map (S.map f) r
fldl :: (a -> Set b -> a) -> a -> RelationMS k b -> a
fldl f acc (RMS r) = M.foldl f acc r
fldMp :: Monoid b1 => (Set b2 -> b1) -> RelationMS k b2 -> b1
fldMp m (RMS r) = M.foldr (mappend . m) mempty r
-- instance Foldable (RelationMS a) (ConstrainedCategory (->) Ord) Hask where
-- foldMap f (RMS r)
-- | M.null r = mempty
-- | otherwise = FL.foldMap f r
-- ffoldl f = uncurry $ M.foldl (curry f)
You need FL.foldMap (FL.foldMap f) r in your definition so that you fold over the Map and the Set.
However, there's a critical error in your Functor instance; your fmap is partial. It's not defined on IdRMS.
I suggest using -Wall to have the compiler warn you about such issues.
The problem comes down to needing to represent relations with both finite and infinite domains. While IdRMS :: RelationMS a a can already represent some relations with an infinite domain, it isn't powerful enough to represent a relation like fmap (\x -> [x]) IdRMS.
One approach is to use Map a (Set b) for finite relations and a -> Set b for infinite relations.
data Relation a b where
  Fin :: Map a (Set b) -> Relation a b
  Inf :: (a -> Set b) -> Relation a b
image :: Ord a => Relation a b -> a -> Set b
image (Fin f) a = M.findWithDefault S.empty a f
image (Inf f) a = f a
This changes the category instance accordingly:
instance Category Relation where
  type Object Relation a = Ord a
  id = Inf S.singleton
  f . Fin g = Fin $ M.mapMaybe (nonEmptySet . concatMapSet (image f)) g
  f . Inf g = Inf $ concatMapSet (image f) . g
nonEmptySet :: Set a -> Maybe (Set a)
nonEmptySet s | S.null s = Nothing
              | otherwise = Just s
concatMapSet :: Ord b => (a -> Set b) -> Set a -> Set b
concatMapSet f = S.unions . fmap f . S.toList
And now you can define a total Functor instance:
instance Functor (Relation a) (Ord ⊢ (->)) Hask where
  fmap (ConstrainedMorphism f) = ConstrainedMorphism $ \case -- using {-# LANGUAGE LambdaCase #-}
    Fin g -> Fin $ fmap (S.map f) g
    Inf g -> Inf $ fmap (S.map f) g
But a new issue raises its head when defining the Foldable instance:
instance Foldable (Relation a) (Ord ⊢ (->)) Hask where
  foldMap (ConstrainedMorphism f) = ConstrainedMorphism $ \case
    Fin g -> Prelude.foldMap (Prelude.foldMap f) g
    Inf g -> -- uh oh...problem!
We have f :: b -> m and g :: a -> Set b. Monoid m gives us mappend :: m -> m -> m, and we know Ord a, but in order to generate all the b values in the image of the relation, we need all the possible a values!
One way you could try to salvage this is to use Bounded and Enum as additional constraints on the relation's domain. Then you could try to enumerate all the possible a values with [minBound..maxBound] (this may not list every value for all types; I'm not sure whether that's a law for Bounded and Enum).
instance (Enum a, Bounded a) => Foldable (Relation a) (Ord ⊢ (->)) Hask where
  foldMap (ConstrainedMorphism f) = ConstrainedMorphism $ \case
    Fin g -> Prelude.foldMap (Prelude.foldMap f) g
    Inf g -> Prelude.foldMap (Prelude.foldMap f . g) [minBound .. maxBound]
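To see what the Bounded/Enum idea buys on a tiny domain, here is a hedged illustration (Bit, flipRel and allOutputs are made up for this example, not from the question):
data Bit = O | I deriving (Show, Eq, Ord, Enum, Bounded)
flipRel :: Relation Bit Bit
flipRel = Inf (\b -> S.singleton (if b == O then I else O))
-- the Inf case of foldMap needs exactly this: the whole image, obtained by
-- enumerating every possible input
allOutputs :: Set Bit
allOutputs = S.unions [ image flipRel b | b <- [minBound .. maxBound] ]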

Shortcut fusion for triemaps

This problem arose when attempting to fuse away intermediate triemaps in Haskell.
Consider the trie for Peano natural numbers:
data Nat = Zero | Succ Nat
data ExpoNat a = ExpoNat (Maybe a) (ExpoNat a)
               | NoExpoNat
We can easily define a fold on ExpoNat (it is essentially a list) and use foldr/build (a.k.a. finally tagless) to fuse away intermediate occurrences of ExpoNat:
{-# NOINLINE fold #-}
fold :: (Maybe a -> b -> b) -> b -> ExpoNat a -> b
fold f z (ExpoNat x y) = f x (fold f z y)
fold f z NoExpoNat = z
{-# NOINLINE build #-}
build :: (forall b. (Maybe a -> b -> b) -> b -> b) -> ExpoNat a
build f = f ExpoNat NoExpoNat
{-# RULES "fold/build" forall f n (g :: forall b. (Maybe a -> b -> b) -> b -> b). fold f n (build g) = g f n #-}
As an example, we take match and appl from "Is there a way to generalize this TrieMap code?" and compose them such that ExpoNat is fused away. (Note that we must "strengthen the induction hypothesis" in appl.)
{-# INLINE match #-}
match :: Nat -> ExpoNat ()
match n = build $ \f z ->
  let go Zero = f (Just ()) z
      go (Succ n) = f Nothing (go n)
  in go n
{-# INLINE appl #-}
appl :: ExpoNat a -> (Nat -> Maybe a)
appl
  = fold (\f z -> \n ->
            case n of Zero -> f
                      Succ n' -> z n')
         (\n -> Nothing)
applmatch :: Nat -> Nat -> Maybe ()
applmatch x = appl (match x)
The fusion can be verified by inspecting Core with -ddump-simpl.
Now we would like to do the same for Tree.
data Tree = Leaf | Node Tree Tree
data TreeMap a
= TreeMap {
tm_leaf :: Maybe a,
tm_node :: TreeMap (TreeMap a)
}
| EmptyTreeMap
We are in trouble: TreeMap is a non-regular data type, and so it is not obvious how to write its corresponding fold/build pair.
Haskell Programming with Nested Types: A Principled Approach seems to have the answer (see the Bush type) but 4:30 AM seems to be too late for me to get it working. How is one supposed to write hfmap? Have there been further developments since?
A similar variant of this question has been asked in What's the type of a catamorphism (fold) for non-regular recursive types?
I worked on it some more and I now have working fusion, without using the generic gadgets from the paper.
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveTraversable #-}
module Tree where
data Tree = Leaf | Node Tree Tree
deriving (Show)
data ExpoTree a = ExpoTree (Maybe a) (ExpoTree (ExpoTree a))
                | NoExpoTree
  deriving (Show, Functor)
I derived most of the specialized types by taking the generic construction and then inlining type definitions until I bottomed out. I've kept the generic construction in here for ease of comparison.
data HExpoTree f a = HExpoTree (Maybe a) (f (f a))
                   | HNoExpoTree
type g ~> h = forall a. g a -> h a
class HFunctor f where
  ffmap :: Functor g => (a -> b) -> f g a -> f g b
  hfmap :: (Functor g, Functor h) => (g ~> h) -> (f g ~> f h)
instance HFunctor HExpoTree where
  ffmap f HNoExpoTree = HNoExpoTree
  ffmap f (HExpoTree x y) = HExpoTree (fmap f x) (fmap (fmap f) y)
  hfmap f HNoExpoTree = HNoExpoTree
  hfmap f (HExpoTree x y) = HExpoTree x (f (fmap f y))
type Alg f g = f g ~> g
newtype Mu f a = In { unIn :: f (Mu f) a }
instance HFunctor f => Functor (Mu f) where
  fmap f (In r) = In (ffmap f r)
hfold :: (HFunctor f, Functor g) => Alg f g -> (Mu f ~> g)
hfold m (In u) = m (hfmap (hfold m) u)
An Alg ExpoTreeH g can be decomposed into a product of two natural transformations:
type ExpoTreeAlg g = forall a. Maybe a -> g (g a) -> g a
type NoExpoTreeAlg g = forall a. g a
{-# NOINLINE fold #-}
fold :: Functor g => ExpoTreeAlg g -> NoExpoTreeAlg g -> ExpoTree a -> g a
fold f z NoExpoTree = z
fold f z (ExpoTree x y) = f x (fold f z (fmap (fold f z) y))
The natural transformation here c ~> x is very interesting, and turns out to be quite necessary. Here's the build translation:
hbuild :: HFunctor f => (forall x. Alg f x -> (c ~> x)) -> (c ~> Mu f)
hbuild g = g In
newtype I :: (* -> *) where
  I :: x -> I x
  deriving (Show, Eq, Functor, Foldable, Traversable)
-- Needs to be a newtype, otherwise RULE firer gets bamboozled
newtype ExpoTreeBuilder c = ETP {runETP :: (forall x. Functor x
  => (forall a. Maybe a -> x (x a) -> x a)
  -> (forall a. x a)
  -> (forall a. c a -> x a)
  )}
{-# NOINLINE build #-}
build :: ExpoTreeBuilder c -> forall a. c a -> ExpoTree a
build g = runETP g ExpoTree NoExpoTree
The newtype for the builder function is needed because GHC 8.0 doesn't know how to fire the RULE without it.
Now, the shortcut fusion rule:
{-# RULES "ExpoTree fold/build"
forall (g :: ExpoTreeBuilder c) c (f :: ExpoTreeAlg g) (n :: NoExpoTreeAlg g).
fold f n (build g c) = runETP g f n c #-}
Implementation of 'match' with 'build':
{-# INLINE match #-}
match :: Tree -> ExpoTree ()
match n = build (match_mk n) (I ())
  where
    match_mk :: Tree -> ExpoTreeBuilder I
    match_mk Leaf = ETP $ \ f z (I c) -> f (Just c) z
    match_mk (Node x y) = ETP $ \ f z c ->
      -- NB: This fmap is bad for performance
      f Nothing (fmap (const (runETP (match_mk y) f z c)) (runETP (match_mk x) f z c))
Implementation of 'appl' with 'fold' (we need to define a custom functor to define the return type.)
newtype PFunTree a = PFunTree { runPFunTree :: Tree -> Maybe a }
  deriving (Functor)
{-# INLINE appl #-}
appl :: ExpoTree a -> PFunTree a
appl = fold appl_expoTree appl_noExpoTree
  where
    appl_expoTree :: ExpoTreeAlg PFunTree
    appl_expoTree = \z f -> PFunTree $ \n ->
      case n of Leaf -> z
                Node n1 n2 -> runPFunTree f n1 >>= flip runPFunTree n2
    appl_noExpoTree :: NoExpoTreeAlg PFunTree
    appl_noExpoTree = PFunTree $ \n -> Nothing
Putting it all together:
applmatch :: Tree -> Tree -> Maybe ()
applmatch x = runPFunTree (appl (match x))
We can once again inspect the core with -ddump-simpl. Unfortunately, while we have successfully fused away the TrieMap data structure, we are left with suboptimal code due to the fmap in match. Eliminating this inefficiency is left to future work.
The paper appears to draw a parallel between ExpoNat a as a recursive Type and Tree as a recursive type constructor (Type -> Type).
newtype Fix f = Fix (f (Fix f))
newtype HFix h a = HFix (h (HFix h) a)
Fix f represents the least fixed point of the endofunctor on the category of types and functions, f :: Type -> Type; HFix h represents the least fixed point of the endofunctor h on a category of functors and natural transformations, h :: (Type -> Type) -> (Type -> Type).
-- x ~ Fix (ExpoNatF a) ~ ExpoNat
data ExpoNatF a x = ExpoNatF (Maybe a) x | NoExpoNatF
fmap :: (x -> y) -> ExpoNatF a x -> ExpoNatF a y
fmap f (ExpoNatF u v) = ExpoNatF u (f v)
fmap _ NoExpoNatF = NoExpoNatF
-- f ~ HFix TreeMapH ~ TreeMap
data TreeMapH f a = TreeMapH (Maybe a) (f (f a)) | EmptyTreeMapH
hfmap :: (f ~> g) -> (TreeMapH f ~> TreeMapH g)
hfmap f (TreeMapH u v) = TreeMapH u ((fmap . fmap) f v)
hfmap _ EmptyTreeMapH = EmptyTreeMapH
-- (~>) is the type of natural transformations
type f ~> g = forall a. f a -> g a
Endofunctors give rise to algebras.
type Alg f a = f a -> a
type HAlg h f = h f ~> f
fold, or cata, maps any algebra to a morphism (a function or a natural transformation, respectively).
cata :: Alg f a -> Fix f -> a
hcata :: HAlg h f -> (HFix h ~> f)
build constructs a value from its Church encoding.
type Church f = forall a. Alg f a -> a
type HChurch h a = forall f. HAlg h f -> f a
build :: Church f -> Fix f
hbuild :: HChurch h a -> HFix h a
-- The paper actually has a slightly different type for Church encodings, derived from the categorical view, but I'm pretty sure they're equivalent
build/fold fusion is summarized by one equation.
cata alg (build f) = f alg
hcata alg (hbuild f) = f alg

Is there a way to generalize this TrieMap code?

Below is a simple Haskell program which computes equalities on trees:
import Control.Monad
import Control.Applicative
import Data.Maybe
data Tree = Leaf | Node Tree Tree
eqTree :: Tree -> Tree -> Maybe ()
eqTree Leaf Leaf = return ()
eqTree (Node l1 r1) (Node l2 r2) = eqTree l1 l2 >> eqTree r1 r2
eqTree _ _ = empty
Suppose you have an association list of trees [(Tree, a)], and you'd like to find the entry for a given tree. (One can think of this as a simplified version of the type class instance lookup problem.) Naively, we would have to do O(n*s) work, where n is the number of trees, and s is the size of each tree.
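To make the cost concrete, the naive association-list lookup is just a linear scan that reruns eqTree on every key (a hedged sketch; lookupNaive is a name introduced here, not from the question):
-- O(n*s): each of the n entries may require comparing trees of size s
lookupNaive :: [(Tree, a)] -> Tree -> Maybe a
lookupNaive kvs t = foldr (\(k, v) rest -> (v <$ eqTree k t) <|> rest) Nothing kvs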
We can do better if we use a trie map to represent our association list:
(>.>) = flip (.)
data TreeMap a
  = TreeMap {
      tm_leaf :: Maybe a,
      tm_node :: TreeMap (TreeMap a)
    }
lookupTreeMap :: Tree -> TreeMap a -> Maybe a
lookupTreeMap Leaf = tm_leaf
lookupTreeMap (Node l r) = tm_node >.> lookupTreeMap l >=> lookupTreeMap r
Our lookup now only takes O(s). This algorithm is a strict generalization of the previous one, since we can test for equality by creating a singleton TreeMap () and then seeing if we get back Just (). But for practical reasons, we'd prefer not to do this, since it involves building up a TreeMap and then immediately tearing it down.
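For concreteness, here is a hedged sketch of that singleton trick (emptyTM, singletonTM and eqViaTrie are helpers introduced here, not part of the question); it relies on laziness, so only the parts of the trie the lookup touches are ever forced:
emptyTM :: TreeMap a
emptyTM = TreeMap Nothing emptyTM
singletonTM :: Tree -> a -> TreeMap a
singletonTM Leaf a = TreeMap (Just a) emptyTM
singletonTM (Node l r) a = TreeMap Nothing (singletonTM l (singletonTM r a))
-- equality by building a trie and immediately tearing it down
eqViaTrie :: Tree -> Tree -> Maybe ()
eqViaTrie t u = lookupTreeMap u (singletonTM t ())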
Is there a way to generalize the two pieces of code above into a new function that can operate on both Tree and TreeMap? There seems to be some similarity in how the code is structured, but it is not obvious how to abstract the differences away.
Edit: I remembered a very helpful fact about logarithms and derivatives which I discovered whilst disgustingly hung over on a friend's sofa. Sadly, that friend (the late great Kostas Tourlas) is no longer with us, but I commemorate him by being disgustingly hung over on a different friend's sofa.
Let's remind ourselves about tries. (Lots of my mates were working on these structures in the early noughties: Ralf Hinze, Thorsten Altenkirch and Peter Hancock spring instantly to mind in that regard.) What's really going on is that we're computing the exponential of a type t, remembering that t -> x is a way of writing x ^ t.
That is, we expect to equip a type t with a functor Expo t such that Expo t x represents t -> x. We should further expect Expo t to be applicative (zippily). Edit: Hancock calls such functors "Naperian", because they have logarithms, and they're applicative in the same way as functions, with pure being the K combinator and <*> being S. It is immediate that Expo t () must be isomorphic with (), with const (pure ()) and const () doing the (not much) work.
class Applicative (Expo t) => EXPO t where
  type Expo t :: * -> *
  appl :: Expo t x -> (t -> x) -- trie lookup
  abst :: (t -> x) -> Expo t x -- trie construction
Another way of putting it is that t is the logarithm of Expo t.
(I nearly forgot: fans of calculus should check that t is isomorphic to ∂ (Expo t) (). This isomorphism might actually be rather useful. Edit: it's extremely useful, and we shall add it to EXPO later.)
We'll need some functor kit stuff. The identity functor is zippily applicative...
data I :: (* -> *) where
  I :: x -> I x
  deriving (Show, Eq, Functor, Foldable, Traversable)
instance Applicative I where
  pure x = I x
  I f <*> I s = I (f s)
...and its logarithm is the unit type
instance EXPO () where
  type Expo () = I
  appl (I x) () = x
  abst f = I (f ())
Products of zippy applicatives are zippily applicative...
data (:*:) :: (* -> *) -> (* -> *) -> (* -> *) where
  (:*:) :: f x -> g x -> (f :*: g) x
  deriving (Show, Eq, Functor, Foldable, Traversable)
instance (Applicative p, Applicative q) => Applicative (p :*: q) where
  pure x = pure x :*: pure x
  (pf :*: qf) <*> (ps :*: qs) = (pf <*> ps) :*: (qf <*> qs)
...and their logarithms are sums.
instance (EXPO s, EXPO t) => EXPO (Either s t) where
  type Expo (Either s t) = Expo s :*: Expo t
  appl (sf :*: tf) (Left s) = appl sf s
  appl (sf :*: tf) (Right t) = appl tf t
  abst f = abst (f . Left) :*: abst (f . Right)
Compositions of zippy applicatives are zippily applicative...
data (:<:) :: (* -> *) -> (* -> *) -> (* -> *) where
  C :: f (g x) -> (f :<: g) x
  deriving (Show, Eq, Functor, Foldable, Traversable)
instance (Applicative p, Applicative q) => Applicative (p :<: q) where
  pure x = C (pure (pure x))
  C pqf <*> C pqs = C (pure (<*>) <*> pqf <*> pqs)
and their logarithms are products.
instance (EXPO s, EXPO t) => EXPO (s, t) where
  type Expo (s, t) = Expo s :<: Expo t
  appl (C stf) (s, t) = appl (appl stf s) t
  abst f = C (abst $ \ s -> abst $ \ t -> f (s, t))
If we switch on enough stuff, we may now write
newtype Tree = Tree (Either () (Tree, Tree))
  deriving (Show, Eq)
pattern Leaf = Tree (Left ())
pattern Node l r = Tree (Right (l, r))
newtype ExpoTree x = ExpoTree (Expo (Either () (Tree, Tree)) x)
  deriving (Show, Eq, Functor, Applicative)
instance EXPO Tree where
  type Expo Tree = ExpoTree
  appl (ExpoTree f) (Tree t) = appl f t
  abst f = ExpoTree (abst (f . Tree))
The TreeMap a type in the question, being
data TreeMap a
  = TreeMap {
      tm_leaf :: Maybe a,
      tm_node :: TreeMap (TreeMap a)
    }
is exactly Expo Tree (Maybe a), with lookupTreeMap being flip appl.
Now, given that Tree and Tree -> x are rather different things, it strikes me as odd to want code to work "on both". The tree equality test is a special case of the lookup only in that the tree equality test is any old function which acts on a tree. There is a coincidence, however: to test equality, we must turn each tree into its own self-recognizer. Edit: that's exactly what the log-diff iso does.
The structure which gives rise to an equality test is some notion of matching. Like this:
class Matching a b where
  type Matched a b :: *
  matched :: Matched a b -> (a, b)
  match :: a -> b -> Maybe (Matched a b)
That is, we expect Matched a b to represent somehow a pair of an a and a b which match. We should be able to extract the pair (forgetting that they match), and we should be able to take any pair and try to match them.
Unsurprisingly, we can do this for the unit type, quite successfully.
instance Matching () () where
  type Matched () () = ()
  matched () = ((), ())
  match () () = Just ()
For products, we work componentwise, with component mismatch being the only danger.
instance (Matching s s', Matching t t') => Matching (s, t) (s', t') where
  type Matched (s, t) (s', t') = (Matched s s', Matched t t')
  matched (ss', tt') = ((s, t), (s', t')) where
    (s, s') = matched ss'
    (t, t') = matched tt'
  match (s, t) (s', t') = (,) <$> match s s' <*> match t t'
Sums offer some chance of mismatch.
instance (Matching s s', Matching t t') =>
    Matching (Either s t) (Either s' t') where
  type Matched (Either s t) (Either s' t')
    = Either (Matched s s') (Matched t t')
  matched (Left ss') = (Left s, Left s') where (s, s') = matched ss'
  matched (Right tt') = (Right t, Right t') where (t, t') = matched tt'
  match (Left s) (Left s') = Left <$> match s s'
  match (Right t) (Right t') = Right <$> match t t'
  match _ _ = Nothing
Amusingly, we can obtain an equality test for trees now as easily as
instance Matching Tree Tree where
  type Matched Tree Tree = Tree
  matched t = (t, t)
  match (Tree t1) (Tree t2) = Tree <$> match t1 t2
(Incidentally, the Functor subclass that captures a notion of matching, being
class HalfZippable f where -- "half zip" comes from Roland Backhouse
  halfZip :: (f a, f b) -> Maybe (f (a, b))
is sadly neglected. Morally, for each such f, we should have
Matched (f a) (f b) = f (Matched a b)
A fun exercise is to show that if (Traversable f, HalfZippable f), then the free monad on f has a first-order unification algorithm.)
I suppose we can build "singleton association lists" like this:
mapOne :: forall a. (Tree, a) -> Expo Tree (Maybe a)
mapOne (t, a) = abst f where
  f :: Tree -> Maybe a
  f u = pure a <* match t u
And we could try combining them with this gadget, exploiting the zippiness of all the Expo ts...
instance Monoid x => Monoid (ExpoTree x) where
  mempty = pure mempty
  mappend t u = mappend <$> t <*> u
...but, yet again, the utter stupidity of the Monoid instance for Maybe x continues to frustrate clean design.
We can at least manage
instance Alternative m => Alternative (ExpoTree :<: m) where
  empty = C (pure empty)
  C f <|> C g = C ((<|>) <$> f <*> g)
An amusing exercise is to fuse abst with match, and perhaps that's what the question is really driving at. Let's refactor Matching.
class EXPO b => Matching a b where
  type Matched a b :: *
  matched :: Matched a b -> (a, b)
  match' :: a -> Proxy b -> Expo b (Maybe (Matched a b))
data Proxy x = Poxy -- I'm not on GHC 8 yet, and Simon needs a hand here
For (), what's new is
instance Matching () () where
  -- skip old stuff
  match' () (Poxy :: Proxy ()) = I (Just ())
For sums, we need to tag successful matches, and fill in the unsuccessful parts with a magnificently Glaswegian pure Nothing.
instance (Matching s s', Matching t t') =>
    Matching (Either s t) (Either s' t') where
  -- skip old stuff
  match' (Left s) (Poxy :: Proxy (Either s' t')) =
    ((Left <$>) <$> match' s (Poxy :: Proxy s')) :*: pure Nothing
  match' (Right t) (Poxy :: Proxy (Either s' t')) =
    pure Nothing :*: ((Right <$>) <$> match' t (Poxy :: Proxy t'))
For pairs, we need to build matching in sequence, dropping out early if the first component fails.
instance (Matching s s', Matching t t') => Matching (s, t) (s', t') where
  -- skip old stuff
  match' (s, t) (Poxy :: Proxy (s', t'))
    = C (more <$> match' s (Poxy :: Proxy s')) where
      more Nothing = pure Nothing
      more (Just s) = ((,) s <$>) <$> match' t (Poxy :: Proxy t')
So we can see that there is a connection between a constructor and the trie for its matcher.
Homework: fuse abst with match', effectively tabulating the entire process.
Edit: writing match', we parked each sub-matcher in the position of the trie corresponding to the sub-structure. And when you think of things in particular positions, you should think of zippers and differential calculus. Let me remind you.
We'll need functorial constants and coproducts to manage choice of "where the hole is".
data K :: * -> (* -> *) where
  K :: a -> K a x
  deriving (Show, Eq, Functor, Foldable, Traversable)
data (:+:) :: (* -> *) -> (* -> *) -> (* -> *) where
  Inl :: f x -> (f :+: g) x
  Inr :: g x -> (f :+: g) x
  deriving (Show, Eq, Functor, Foldable, Traversable)
And now we may define
class (Functor f, Functor (D f)) => Differentiable f where
  type D f :: (* -> *)
  plug :: (D f :*: I) x -> f x
  -- there should be other methods, but plug will do for now
The usual laws of calculus apply, with composition giving a spatial interpretation to the chain rule.
instance Differentiable (K a) where
  type D (K a) = K Void
  plug (K bad :*: I x) = K (absurd bad)
instance Differentiable I where
  type D I = K ()
  plug (K () :*: I x) = I x
instance (Differentiable f, Differentiable g) => Differentiable (f :+: g) where
  type D (f :+: g) = D f :+: D g
  plug (Inl f' :*: I x) = Inl (plug (f' :*: I x))
  plug (Inr g' :*: I x) = Inr (plug (g' :*: I x))
instance (Differentiable f, Differentiable g) => Differentiable (f :*: g) where
  type D (f :*: g) = (D f :*: g) :+: (f :*: D g)
  plug (Inl (f' :*: g) :*: I x) = plug (f' :*: I x) :*: g
  plug (Inr (f :*: g') :*: I x) = f :*: plug (g' :*: I x)
instance (Differentiable f, Differentiable g) => Differentiable (f :<: g) where
  type D (f :<: g) = (D f :<: g) :*: D g
  plug ((C f'g :*: g') :*: I x) = C (plug (f'g :*: I (plug (g' :*: I x))))
It will not harm us to insist that Expo t is differentiable, so let us extend the EXPO class. What's a "trie with a hole"? It's a trie which is missing the output entry for exactly one of the possible inputs. And that's the key.
class (Differentiable (Expo t), Applicative (Expo t)) => EXPO t where
  type Expo t :: * -> *
  appl :: Expo t x -> t -> x
  abst :: (t -> x) -> Expo t x
  hole :: t -> D (Expo t) ()
  eloh :: D (Expo t) () -> t
Now, hole and eloh will witness the isomorphism.
instance EXPO () where
  type Expo () = I
  -- skip old stuff
  hole () = K ()
  eloh (K ()) = ()
The unit case wasn't very exciting, but the sum case begins to show structure:
instance (EXPO s, EXPO t) => EXPO (Either s t) where
  type Expo (Either s t) = Expo s :*: Expo t
  hole (Left s) = Inl (hole s :*: pure ())
  hole (Right t) = Inr (pure () :*: hole t)
  eloh (Inl (f' :*: _)) = Left (eloh f')
  eloh (Inr (_ :*: g')) = Right (eloh g')
See? A Left is mapped to a trie with a hole on the left; a Right is mapped to a trie with a hole on the right.
Now for products.
instance (EXPO s, EXPO t) => EXPO (s, t) where
  type Expo (s, t) = Expo s :<: Expo t
  hole (s, t) = C (const (pure ()) <$> hole s) :*: hole t
  eloh (C f' :*: g') = (eloh (const () <$> f'), eloh g')
A trie for a pair is a right trie stuffed inside a left trie, so the hole for a particular pair is found by making a hole for the right element in the particular subtrie for the left element.
For trees, we make another wrapper.
newtype DExpoTree x = DExpoTree (D (Expo (Either () (Tree, Tree))) x)
  deriving (Show, Eq, Functor)
So, how do we turn a tree into its trie recognizer? First, we grab its "everyone but me" trie, and we fill in all those outputs with False, then we plug in True for the missing entry.
matchMe :: EXPO t => t -> Expo t Bool
matchMe t = plug ((const False <$> hole t) :*: I True)
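Assuming the extended EXPO Tree instance gets finished, tree equality then falls out as looking one tree up in the other's recognizer (a hedged one-liner; selfEq is our name, not from the answer):
selfEq :: EXPO t => t -> t -> Bool
selfEq t u = appl (matchMe t) u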
Homework hint: D f :*: I is a comonad.
Absent friends!
This is a naïve solution. The class BinaryTree describes how both Trees and TreeMaps are binary trees.
{-# LANGUAGE RankNTypes, MultiParamTypeClasses, FlexibleInstances #-}
class BinaryTree t a where
  leaf :: MonadPlus m => t a -> m a
  node :: MonadPlus m => (forall r. BinaryTree t r => t r -> m r) ->
          (forall r. BinaryTree t r => t r -> m r) ->
          t a -> m a
The awkward BinaryTree t r constraints and the multi-parameter type class are only necessary because Trees don't hold an a at their leaves to return. If your real Tree is richer this wrinkle will probably disappear.
lookupTreeMap can be written in terms of BinaryTree instead of in terms of Tree or TreeMap.
lookupTreeMap' :: BinaryTree t r => Tree -> t r -> Maybe r
lookupTreeMap' Leaf = leaf
lookupTreeMap' (Node l r) = node (lookupTreeMap' l) (lookupTreeMap' r)
TreeMap has a straightforward BinaryTree instance.
instance BinaryTree TreeMap a where
  leaf = maybe empty return . tm_leaf
  node kl kr = tm_node >.> kl >=> kr
Tree can't have a BinaryTree instance because it has the wrong kind. That's easily fixed with a newtype:
newtype Tree2 a = Tree2 {unTree2 :: Tree}
tree2 :: Tree -> Tree2 ()
tree2 = Tree2
Tree2 can be equipped with a BinaryTree instance.
instance BinaryTree Tree2 () where
  leaf (Tree2 Leaf) = return ()
  leaf _ = empty
  node kl kr (Tree2 (Node l r)) = kl (tree2 l) >> kr (tree2 r)
  node _ _ _ = empty
I don't think the above is a particularly elegant solution, or that it will necessarily simplify anything, unless the implementation of lookupTreeMap is non-trivial. As an incremental improvement, I'd recommend refactoring Tree into the base functor
data TreeF a = Leaf | Node a a
data Tree = Tree (TreeF Tree)
We can split the problem into matching the base functor against itself,
-- This looks like a generalized version of Applicative that can fail
untreeF :: MonadPlus m => TreeF (a -> m b) -> TreeF a -> m (TreeF b)
untreeF Leaf Leaf = return Leaf
untreeF (Node kl kr) (Node l r) = Node <$> kl l <*> kr r
untreeF _ _ = empty
matching the base functor against Trees,
untree :: MonadPlus m => TreeF (Tree -> m ()) -> Tree -> m ()
untree tf (Tree tf2) = untreeF tf tf2 >> return ()
and matching the base functor against TreeMap.
-- A reader for things that read from a TreeMap to avoid impredicative types.
data TMR m = TMR {runtmr :: forall r. TreeMap r -> m r}
-- This work is unavoidable. Something has to say how a TreeMap is related to Trees
untreemap :: MonadPlus m => TreeF (TMR m) -> TMR m
untreemap Leaf = TMR $ maybe empty return . tm_leaf
untreemap (Node kl kr) = TMR $ tm_node >.> runtmr kl >=> runtmr kr
Like in the first example, we define traversing the tree only once.
-- This looks suspiciously like a traversal / transform
lookupTreeMap' :: (TreeF a -> a) -> Tree -> a
lookupTreeMap' un = go
  where
    go (Tree Leaf) = un Leaf
    go (Tree (Node l r)) = un $ Node (go l) (go r)
-- If the traversal is trivial these can be replaced by
-- go (Tree tf) = un $ go <$> tf
The operations specialized for Tree and TreeMap can be obtained from the single definition of the traversal.
eqTree :: Tree -> Tree -> Maybe ()
eqTree = lookupTreeMap' untree
lookupTreeMap :: MonadPlus m => Tree -> TreeMap a -> m a
lookupTreeMap = runtmr . lookupTreeMap' untreemap

How do I give a Functor instance to a datatype built for general recursion schemes?

I have a recursive datatype which has a Functor instance:
data Expr1 a
  = Val1 a
  | Add1 (Expr1 a) (Expr1 a)
  deriving (Eq, Show, Functor)
Now, I'm interested in modifying this datatype to support general recursion schemes, as they are described in this tutorial and this Hackage package. I managed to get the catamorphism to work:
newtype Fix f = Fix {unFix :: f (Fix f)}
data ExprF a r
  = Val a
  | Add r r
  deriving (Eq, Show, Functor)
type Expr2 a = Fix (ExprF a)
cata :: Functor f => (f a -> a) -> Fix f -> a
cata f = f . fmap (cata f) . unFix
eval :: Expr2 Int -> Int
eval = cata $ \case
  Val n -> n
  Add x y -> x + y
main :: IO ()
main =
  print $ eval
    (Fix (Add (Fix (Val 1)) (Fix (Val 2))))
But now I can't figure out how to give Expr2 the same functor instance that the original Expr had. It seems there is a kind mismatch when trying to define the functor instance:
instance Functor (Fix (ExprF a)) where
  fmap = undefined
Kind mis-match
The first argument of `Functor' should have kind `* -> *',
but `Fix (ExprF a)' has kind `*'
In the instance declaration for `Functor (Fix (ExprF a))'
How do I write a Functor instance for Expr2?
I thought about wrapping Expr2 in a newtype with newtype Expr2 a = Expr2 (Fix (ExprF a)) but then this newtype needs to be unwrapped to be passed to cata, which I don't like very much. I also don't know if it would be possible to automatically derive the Expr2 functor instance like I did with Expr1.
This is an old sore for me. The crucial point is that your ExprF is functorial in both its parameters. So if we had
class Bifunctor b where
  bimap :: (x1 -> y1) -> (x2 -> y2) -> b x1 x2 -> b y1 y2
then you could define (or imagine a machine defining for you)
instance Bifunctor ExprF where
  bimap k1 k2 (Val a) = Val (k1 a)
  bimap k1 k2 (Add x y) = Add (k2 x) (k2 y)
and now you can have
newtype Fix2 b a = MkFix2 (b a (Fix2 b a))
accompanied by
map1cata2 :: Bifunctor b => (a -> a') -> (b a' t -> t) -> Fix2 b a -> t
map1cata2 e f (MkFix2 bar) = f (bimap e (map1cata2 e f) bar)
which in turn gives you that when you take a fixpoint in one of the parameters, what's left is still functorial in the other
instance Bifunctor b => Functor (Fix2 b) where
  fmap k = map1cata2 k MkFix2
and you sort of get what you wanted. But your Bifunctor instance isn't going to be built by magic. And it's a bit annoying that you need a different fixpoint operator and a whole new kind of functor. The trouble is that you now have two sorts of substructure: "values" and "subexpressions".
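Before moving on, a quick hedged check that this machinery really does recover a catamorphism and fmap for the original example (evalSum2 and exampleFix2 are names made up here, not from the answer):
evalSum2 :: Fix2 ExprF Int -> Int
evalSum2 = map1cata2 id alg where
  alg (Val n) = n
  alg (Add x y) = x + y
exampleFix2 :: Fix2 ExprF Int
exampleFix2 = MkFix2 (Add (MkFix2 (Val 1)) (MkFix2 (Val 2)))
-- evalSum2 exampleFix2 is 3, and fmap (+ 1) exampleFix2 uses the Functor (Fix2 b) instance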
And here's the turn. There is a notion of functor which is closed under fixpoints. Turn on the kitchen sink (especially DataKinds) and
type s :-> t = forall x. s x -> t x
class FunctorIx (f :: (i -> *) -> (o -> *)) where
  mapIx :: (s :-> t) -> f s :-> f t
Note that "elements" come in a kind indexed over i and "structures" in a kind indexed over some other o. We take i-preserving functions on elements to o-preserving functions on structures. Crucially, i and o can be different.
The magic words are "1, 2, 4, 8, time to exponentiate!". A type of kind * can easily be turned into a trivially indexed GADT of kind () -> *. And two types can be rolled together to make a GADT of kind Either () () -> *. That means we can roll both sorts of substructure together. In general, we have a kind of type level either.
data Case :: (a -> *) -> (b -> *) -> Either a b -> * where
  CL :: f a -> Case f g (Left a)
  CR :: g b -> Case f g (Right b)
equipped with its notion of "map"
mapCase :: (f :-> f') -> (g :-> g') -> Case f g :-> Case f' g'
mapCase ff gg (CL fx) = CL (ff fx)
mapCase ff gg (CR gx) = CR (gg gx)
So we can refunctor our bifunctors as Either-indexed FunctorIx instances.
And now we can take the fixpoint of any node structure f which has places for either elements p or subnodes. It's just the same deal we had above.
newtype FixIx (f :: (Either i o -> *) -> (o -> *))
              (p :: i -> *)
              (b :: o)
  = MkFixIx (f (Case p (FixIx f p)) b)
mapCata :: forall f p q t. FunctorIx f =>
  (p :-> q) -> (f (Case q t) :-> t) -> FixIx f p :-> t
mapCata e f (MkFixIx node) = f (mapIx (mapCase e (mapCata e f)) node)
But now, we get the fact that FunctorIx is closed under FixIx.
instance FunctorIx f => FunctorIx (FixIx f) where
  mapIx f = mapCata f MkFixIx
Functors on indexed sets (with the extra freedom to vary the index) can be very precise and very powerful. They enjoy many more convenient closure properties than Functors do. I don't suppose they'll catch on.
I wonder if you might be better off using the Free type:
data Free f a
  = Pure a
  | Wrap (f (Free f a))
  deriving Functor
data ExprF r
  = Add r r
  deriving Functor
This has the added benefit that there are quite a few libraries that work on free monads already, so maybe they'll save you some work.
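For example (a small hedged sketch using the Free and ExprF above; example and doubled are illustrative names), Pure plays the role of Val1 and Wrap with Add the role of Add1, and fmap comes from the derived instance:
example :: Free ExprF Int
example = Wrap (Add (Pure 1) (Pure 2))
doubled :: Free ExprF Int
doubled = fmap (* 2) example   -- Wrap (Add (Pure 2) (Pure 4))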
Nothing wrong with pigworker's answer, but maybe you can use a simpler one as a stepping-stone:
{-# LANGUAGE DeriveFunctor, ScopedTypeVariables #-}
import Prelude hiding (map)
newtype Fix f = Fix { unFix :: f (Fix f) }
-- This is the catamorphism function you hopefully know and love
-- already. Generalizes 'foldr'.
cata :: Functor f => (f r -> r) -> Fix f -> r
cata phi = phi . fmap (cata phi) . unFix
-- The 'Bifunctor' class. You can find this in Hackage, so if you
-- want to use this just use it from there.
--
-- Minimal definition: either 'bimap' or both 'first' and 'second'.
class Bifunctor f where
  bimap :: (a -> c) -> (b -> d) -> f a b -> f c d
  bimap f g = first f . second g
  first :: (a -> c) -> f a b -> f c b
  first f = bimap f id
  second :: (b -> d) -> f a b -> f a d
  second g = bimap id g
-- The generic map function. I wrote this out with
-- ScopedTypeVariables to make it easier to read...
map :: forall f a b. (Functor (f a), Bifunctor f) =>
       (a -> b) -> Fix (f a) -> Fix (f b)
map f = cata phi
  where phi :: f a (Fix (f b)) -> Fix (f b)
        phi = Fix . first f
Now your expression language works like this:
-- This is the base (bi)functor for your expression type.
data ExprF a r = Val a
               | Add r r
  deriving (Eq, Show, Functor)
instance Bifunctor ExprF where
  bimap f g (Val a) = Val (f a)
  bimap f g (Add l r) = Add (g l) (g r)
newtype Expr a = Expr (Fix (ExprF a))
instance Functor Expr where
  fmap f (Expr exprF) = Expr (map f exprF)
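A quick hedged usage example (expr2 and bumped2 are our own test values, not from the answer):
expr2 :: Expr Int
expr2 = Expr (Fix (Add (Fix (Val 1)) (Fix (Val 2))))
-- uses the Functor Expr instance above; every Val is incremented
bumped2 :: Expr Int
bumped2 = fmap (+ 1) expr2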
EDIT: Here's a link to the bifunctors package in Hackage.
The keyword type only introduces a synonym for an existing type; maybe this is what you are looking for:
newtype Expr2 a r = In { out :: (ExprF a r)} deriving Functor

How to construct an Applicative instance with constraints (similarly to constructing Monad instances using ContT)

This question deals with constructing a proper Monad instance from something that is a monad, but only under certain constraints - for example Set. The trick is to wrap it into ContT, which defers the constraints to wrapping/unwrapping its values.
Now I'd like to do the same with Applicatives. In particular, I have an Applicative instance whose pure has a type-class constraint. Is there a similar trick how to construct a valid Applicative instance?
(Is there "the mother of all applicative functors" just as there is for monads?)
What may be the most consistent way available is starting from Category, where it's quite natural to have a restriction to objects: Object!
class Category k where
  type Object k :: * -> Constraint
  id :: Object k a => k a a
  (.) :: (Object k a, Object k b, Object k c)
      => k b c -> k a b -> k a c
Then we define functors similar to how Edward does it
class (Category r, Category t) => Functor f r t | f r -> t, f t -> r where
  fmap :: (Object r a, Object t (f a), Object r b, Object t (f b))
       => r a b -> t (f a) (f b)
All of this works nicely and is implemented in the constrained-categories library, which – shame on me! – still isn't on Hackage.
Applicative is unfortunately a bit less straightforward to do. Mathematically, these are monoidal functors, so we first need monoidal categories. categories has that class, but it doesn't work with the constraint-based version because our objects are always anything of kind * with a constraint. So what I did is make up a Curry class, which kind of approximates this.
Then, we can do Monoidal functors:
class (Functor f r t, Curry r, Curry t) => Monoidal f r t where
  pure :: (Object r a, Object t (f a)) => a `t` f a
  fzipWith :: (PairObject r a b, Object r c, PairObject t (f a) (f b), Object t (f c))
           => r (a, b) c -> t (f a, f b) (f c)
This is actually equivalent to Applicative when we have proper closed cartesian categories. In the constrained-categories version, the signatures unfortunately look very horrible:
(<*>) :: ( Applicative f r t
         , MorphObject r a b, Object r (r a b)
         , MorphObject t (f a) (f b), Object t (t (f a) (f b)), Object t (f (r a b))
         , PairObject r (r a b) a, PairObject t (f (r a b)) (f a)
         , Object r a, Object r b, Object t (f a), Object t (f b))
      => f (r a b) `t` t (f a) (f b)
Still, it actually works – for the unconstrained case, duh! I haven't yet found a convenient way to use it with nontrivial constraints.
But again, Applicative is equivalent to Monoidal, and that can be used as demonstrated in the Set example.
I'm not sure the notion of "restricted applicative" is unique, as different presentations are not isomorphic. That said, here is one, something at least somewhat along the lines of Codensity. The idea is to have a "free functor" together with a unit.
{-# LANGUAGE TypeFamilies, ConstraintKinds, ExistentialQuantification #-}
import GHC.Prim (Constraint)
import Control.Applicative
class RFunctor f where
  type C f :: * -> Constraint
  rfmap :: C f b => (a -> b) -> f a -> f b
class RFunctor f => RApplicative f where
  rpure :: C f a => a -> f a
  rzip :: f a -> f b -> f (a,b)
data UAp f a
  = Pure a
  | forall b. Embed (f b) (b -> a)
toUAp :: C f a => f a -> UAp f a
toUAp x = Embed x id
fromUAp :: (RApplicative f, C f a) => UAp f a -> f a
fromUAp (Pure x) = rpure x
fromUAp (Embed x f) = rfmap f x
zipUAp :: RApplicative f => UAp f a -> UAp f b -> UAp f (a,b)
zipUAp (Pure a) (Pure b) = Pure (a,b)
zipUAp (Pure a) (Embed b f) = Embed b (\x -> (a,f x))
zipUAp (Embed a f) (Pure b) = Embed a (\x -> (f x,b))
zipUAp (Embed a f) (Embed b g) = Embed (rzip a b) (\(x,y) -> (f x,g y))
instance Functor (UAp f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Embed a g) = Embed a (f . g)
instance RApplicative f => Applicative (UAp f) where
  pure = Pure
  af <*> ax = fmap (\(f,x) -> f x) $ zipUAp af ax
EDIT: Fixed some bugs. That is what happens when you don't compile before posting.
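To see the shape of a concrete instance, here is a hedged Set example (it assumes Data.Set's cartesianProduct, available in recent containers; none of this is part of the answer above):
import qualified Data.Set as Set
instance RFunctor Set.Set where
  type C Set.Set = Ord
  rfmap = Set.map
instance RApplicative Set.Set where
  rpure = Set.singleton
  rzip = Set.cartesianProduct   -- needs no Ord, which is exactly what rzip's type demands
-- evaluates to {2,4,6}: the Ord constraint only shows up when we enter and leave UAp
doubledSet :: Set.Set Int
doubledSet = fromUAp (fmap (* 2) (toUAp (Set.fromList [1, 2, 3])))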
Because every Monad is an Applicative (and hence a Functor), you can use the same ContT trick:
pure becomes return
fmap f x becomes x >>= (return . f)
mf <*> mx becomes mf >>= \f -> mx >>= (return . f)
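For concreteness, here is a minimal sketch of that ContT trick specialized to Set (liftSet and lowerSet are names introduced here; the point is that ContT r Set is a Monad, hence an Applicative, with no constraint on Set at all):
import Control.Monad.Cont (ContT (..))
import qualified Data.Set as S
-- embed a Set; the Ord constraint is deferred to the final result type r
liftSet :: Ord r => S.Set a -> ContT r S.Set a
liftSet s = ContT $ \k -> S.unions (map k (S.toList s))
-- escape back to Set once we have reached the answer type
lowerSet :: Ord a => ContT a S.Set a -> S.Set a
lowerSet m = runContT m S.singleton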
