What benefits do I get from creating an instance of Comonad - haskell

In my application, I'm trying to implement an animation system. In this system, animations are represented as a cyclic list of frames:
data CyclicList a = CL a [a]
We can (inefficiently) advance the animation as follows:
advance :: CyclicList a -> CyclicList a
advance (CL x []) = CL x []
advance (CL x (z:zs)) = CL z (zs ++ [x])
Now, I'm pretty sure that this data type is a comonad:
instance Functor CyclicList where
    fmap f (CL x xs) = CL (f x) (map f xs)

cyclicFromList :: [a] -> CyclicList a
cyclicFromList [] = error "Cyclic list must have one element!"
cyclicFromList (x:xs) = CL x xs

cyclicLength :: CyclicList a -> Int
cyclicLength (CL _ xs) = length xs + 1

listCycles :: CyclicList a -> [CyclicList a]
listCycles cl = let
    helper 0 _   = []
    helper n cl' = cl' : (helper (n-1) $ advance cl')
  in helper (cyclicLength cl) cl

instance Comonad CyclicList where
    extract (CL x _) = x
    duplicate = cyclicFromList . listCycles
The question I have is: what kind of benefits do I get (if any) from using the comonad instance?

The advantage of providing a type class or implementing an interface is that code written to use that type class or interface can use your code without any modifications.
What programs can be written in terms of Comonad? A Comonad provides a way to both inspect the value at the current location (without observing its neighbors) using extract and a way to observe the neighborhood of every location with duplicate or extend. Without any additional functions, this isn't terribly useful. However, if we also require other functions along with the Comonad instance, we can write programs that depend on both local data and data from elsewhere. For example, if we require functions that allow us to change location, such as your advance, we can write programs that depend only on the local structure of the data, not on the data structure itself.
For a concrete example, consider a cellular automata program written in terms of Comonad and the following Bidirectional class:
class Bidirectional c where
    forward  :: c a -> Maybe (c a)
    backward :: c a -> Maybe (c a)
The program could use this, together with Comonad, to extract data stored in a cell and explore the cells forward and backward of the current cell. It can use duplicate to capture the neighborhood of each cell and fmap to inspect that neighborhood. This combination, fmap f . duplicate, is extend f.
Here is such a program. rule' is only incidental to the example; it implements cellular automaton rules on a neighborhood consisting of just the left, current, and right values. rule extracts the data from the neighborhood, given the class, and runs the rule on each neighborhood. slice pulls out even larger neighborhoods so that we can display them easily. simulate runs the simulation, displaying these larger neighborhoods for each generation.
rule' :: Word8 -> Bool -> Bool -> Bool -> Bool
rule' x l m r = testBit x ((if l then 4 else 0) .|. (if m then 2 else 0) .|. (if r then 1 else 0))

rule :: (Comonad w, Bidirectional w) => Word8 -> w Bool -> w Bool
rule x = extend go
  where
    go w = rule' x (maybe False extract . backward $ w) (extract w) (maybe False extract . forward $ w)

slice :: (Comonad w, Bidirectional w) => Int -> Int -> a -> w a -> [a]
slice l r a w = sliceL l w (extract w : sliceR r w)
  where
    sliceR r w | r > 0 = case forward w of
        Nothing -> take r (repeat a)
        Just w' -> extract w' : sliceR (r-1) w'
    sliceR _ _ = []
    sliceL l w r | l > 0 = case backward w of
        Nothing -> take l (repeat a) ++ r
        Just w' -> sliceL (l-1) w' (extract w' : r)
    sliceL _ _ r = r

simulate :: (Comonad w, Bidirectional w) => (w Bool -> w Bool) -> Int -> Int -> Int -> w Bool -> IO ()
simulate f l r x w = mapM_ putStrLn . map (map (\x -> if x then '1' else '0') . slice l r False) . take x . iterate f $ w
This program might have been intended to work with the following Bidirectional Comonad, a Zipper on a list.
data Zipper a = Zipper {
    heads :: [a],
    here  :: a,
    tail  :: [a]
} deriving Functor

instance Bidirectional Zipper where
    forward  (Zipper _ _ []    ) = Nothing
    forward  (Zipper l h (r:rs)) = Just $ Zipper (h:l) r rs
    backward (Zipper []     _ _) = Nothing
    backward (Zipper (l:ls) h r) = Just $ Zipper ls l (h:r)

instance Comonad Zipper where
    extract = here
    duplicate (Zipper l h r) = Zipper (goL (h:r) l) (Zipper l h r) (goR (h:l) r)
      where
        goL r []    = []
        goL r (h:l) = Zipper l h r : goL (h:r) l
        goR l []    = []
        goR l (h:r) = Zipper l h r : goR (h:l) r
But it will also work with the following CyclicList Bidirectional Comonad.
data CyclicList a = CL a (Seq a)
    deriving (Show, Eq, Functor)

instance Bidirectional CyclicList where
    forward (CL x xs) = Just $ case viewl xs of
        EmptyL    -> CL x xs
        x' :< xs' -> CL x' (xs' |> x)
    backward (CL x xs) = Just $ case viewr xs of
        EmptyR    -> CL x xs
        xs' :> x' -> CL x' (x <| xs')

instance Comonad CyclicList where
    extract (CL x _) = x
    duplicate (CL x xs) = CL (CL x xs) (go (singleton x) xs)
      where
        go old new = case viewl new of
            EmptyL    -> empty
            x' :< xs' -> CL x' (xs' >< old) <| go (old |> x') xs'
We can reuse simulate with either data structure. The CyclicList has a more interesting output, because, instead of bumping into a wall, it wraps back around to interact with itself.
{-# LANGUAGE DeriveFunctor #-}

import Control.Comonad
import Data.Sequence hiding (take)
import Data.Bits
import Data.Word

main = do
    putStrLn "10 + 1 + 10 Zipper"
    simulate (rule 110) 10 10 30 $ Zipper (take 10 . repeat $ False) True (take 10 . repeat $ False)
    putStrLn "10 + 1 + 10 Cyclic"
    simulate (rule 110) 10 10 30 $ CL True (fromList (take 20 . repeat $ False))

Related

Using Comonad Fix Combinators

So I've been experimenting with fixed points lately and have finally struggled through regular fixed points enough to discover some uses; now I'm moving on to comonadic fixed points and I'm afraid I've gotten stuck.
Here are a few examples of what I've tried and what has/hasn't worked:
{-# language DeriveFunctor #-}
{-# language FlexibleInstances #-}
module WFix where
import Control.Comonad
import Control.Comonad.Cofree
import Control.Monad.Fix
So I started with Löb's theorem as a list; each element of the list is a function which takes the end result to compute its answer. This lets me do 'spreadsheet' calculations where values can depend on other values.
spreadSheetFix :: [Int]
spreadSheetFix = fix $ \result -> [length result, (result !! 0) * 10, (result !! 1) + 1, sum (take 3 result)]
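For reference, tracing the knot by hand (my own check, not output from the original post), the list ties off to:
> spreadSheetFix
[4,40,41,85]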
Okay, so I have basic fix working, time to move on to the comonad types!
Here's a few simple comonads to use for examples:
data Stream a = S a (Stream a)
    deriving (Eq, Show, Functor)

next :: Stream a -> Stream a
next (S _ s) = s

instance Comonad Stream where
    extract (S a _) = a
    duplicate s@(S _ r) = S s (duplicate r)

instance ComonadApply Stream where
    (S f fs) <@> (S a as) = S (f a) (fs <@> as)

data Tape a = Tape [a] a [a]
    deriving (Show, Eq, Functor)

moveLeft, moveRight :: Tape a -> Tape a
moveLeft  w@(Tape [] _ _)    = w
moveLeft  (Tape (l:ls) a rs) = Tape ls l (a:rs)
moveRight w@(Tape _ _ [])    = w
moveRight (Tape ls a (r:rs)) = Tape (a:ls) r rs

instance Comonad Tape where
    extract (Tape _ a _) = a
    duplicate w@(Tape l _ r) = Tape lefts w rights
      where
        lefts  = zipWith const (tail $ iterate moveLeft  w) l
        rights = zipWith const (tail $ iterate moveRight w) r

instance ComonadApply Tape where
    Tape l f r <@> Tape l' a r' = Tape (zipWith ($) l l') (f a) (zipWith ($) r r')
Okay, so the following combinators come from Control.Comonad:
wfix :: Comonad w => w (w a -> a) -> a
wfix w = extract w (extend wfix w)

cfix :: Comonad w => (w a -> a) -> w a
cfix f = fix (extend f)

kfix :: ComonadApply w => w (w a -> a) -> w a
kfix w = fix $ \u -> w <@> duplicate u
I started with trying out wfix:
streamWFix :: Int
streamWFix = wfix st
  where
    incNext = succ . extract . next
    st = (S incNext (S incNext (S (const 0) st)))
> streamWFix
-- 2
This one seems to work by calling the first w a -> a on w until reaching a resolution, const 0 in this case; that makes sense. We can also do this with a Tape:
selfReferentialWFix :: Int
selfReferentialWFix = wfix $ Tape [const 10] ((+5) . extract . moveLeft) []
-- selfReferentialWFix == 15
K, I think I get that one, but I'm kind of stuck on the next ones;
I don't seem to have an intuition for what cfix is supposed to do.
Even the simplest possible thing I could think of spins forever
when I evaluate it; even trying to extract the first element of the stream
using getOne fails.
getOne :: Stream a -> a
getOne (S a _) = a

simpleCFix :: Stream Int
simpleCFix = cfix go
  where
    go _ = 0
Similarly with kfix; even simple tries don't seem to terminate.
My understanding of kfix was that the function in each 'slot' gets
passed a copy of the evaluated comonad focused on that spot; is that the case?
I tried using 'getOne' on this:
streamKFix :: Stream Int
streamKFix = kfix st
  where
    go _ = 0
    st = S go st
Here's a finite attempt using Tape which also fails to run:
tapeKFix :: Tape Int
tapeKFix = kfix $ Tape [] (const 0) []
So, down to my question: could someone please offer some runnable (non-trivial)
examples of using cfix and kfix, and explain how they work? I plan to use kfix to eventually do a "Conway's
game of life" style experiment; am I correct in thinking that kfix would be useful
in working with neighbourhoods around a given cell?
Feel free to ask
any clarifying questions and help me expand my knowledge and intuition of fix!
Thanks!
The ComonadApply and Comonad instances for Tape are insufficiently lazy to be used with kfix.
duplicate for Tape requires that you prove the tape exists before it can conclude that the result is a Tape
instance Comonad Tape where
    extract (Tape _ a _) = a
    duplicate w@(Tape l _ r) = Tape lefts w rights
    --          ^              ^
    --          matches a Tape |
    --          before determining that the result is a Tape
<@> checks that both arguments are Tapes before it can conclude that the result is a Tape
instance ComonadApply Tape where
    Tape l f r <@> Tape l' a r' = Tape (zipWith ($) l l') (f a) (zipWith ($) r r')
    -- ^           ^              ^
    -- matches two Tapes          |
    -- before determining that the result is a Tape
Combined, there's no way for kfix (Tape _ _ _) to ever produce a Tape:
kfix w            = fix $ \u -> w            <@> duplicate u
kfix (Tape _ _ _) = fix $ \u -> (Tape _ _ _) <@> duplicate u
kfix (Tape _ _ _) = fix $ \u -> (Tape _ _ _) <@> case u of (Tape _ _ _) -> ...
--                                   ^                          |
--                                   -------- <<loop>> ---------
You can fix this by making duplicate, <#>, or both more productive by using irrefutable patterns. A pattern ~(Tape l a r) matches even if the Tape constructor hasn't been produced yet. Here's how you'd use it to make duplicate productive
instance Comonad Tape where
    extract (Tape _ a _) = a
    duplicate w@(~(Tape l _ r)) = Tape lefts w rights
      where
        lefts  = zipWith const (tail $ iterate moveLeft  w) l
        rights = zipWith const (tail $ iterate moveRight w) r
Irrefutable pattern matches are equivalent to using functions to extract values. For duplicate it's equivalent to writing
left  (Tape l _ _) = l
right (Tape _ _ r) = r

instance Comonad Tape where
    extract (Tape _ a _) = a
    duplicate w = Tape lefts w rights
      where
        l = left w
        r = right w
        ...
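As a quick check (my addition, not part of the original answer), with the lazier duplicate in place the question's Tape example converges instead of looping; a ghci session should look roughly like:
> extract (kfix (Tape [] (const 0) []))
0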

Is there any type-algebra function mapping an ADT to the set of elements of that ADT?

For some simple ADTs, we can obtain the ADT of sets of that ADT as data Set = Full Set ... Set | Empty Set ... Set, where each constructor carries N copies of Set and N is the number of non-terminal constructors of that ADT. For a more solid example, take Nat:
data Nat = Succ Nat | Zero
We can obtain a type for sets of nats as follows:
data NatSet = Full NatSet | Empty NatSet
So, for example,
empty :: NatSet
empty = Empty empty
insert :: Nat -> NatSet -> NatSet
insert Zero (Empty natverse) = Full natverse
insert Zero (Full natverse) = Full natverse
insert (Succ nat) (Empty natverse) = Empty (insert nat natverse)
insert (Succ nat) (Full natverse) = Full (insert nat natverse)
member :: Nat -> NatSet -> Bool
member Zero (Full natverse) = True
member Zero (Empty natverse) = False
member (Succ nat) (Empty natverse) = member nat natverse
member (Succ nat) (Full natverse) = member nat natverse
main = do
    let set = foldr insert empty [Zero, Succ (Succ Zero), Succ (Succ (Succ (Succ (Succ Zero))))]
    print $ member Zero set
    print $ member (Succ Zero) set
    print $ member (Succ (Succ Zero)) set
    print $ member (Succ (Succ (Succ Zero))) set
    print $ member (Succ (Succ (Succ (Succ Zero)))) set
    print $ member (Succ (Succ (Succ (Succ (Succ Zero))))) set
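If I've traced insert and member correctly, running this prints:
True
False
True
False
False
True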
For other types, it is equally simple:
data Bin = A Bin | B Bin | C
data BinSet = Empty BinSet BinSet | Full BinSet BinSet
But what about types that branch? It isn't obvious to me:
data Tree = Branch Tree Tree | Tip
data TreeSet = ???
Is there any type-algebraic argument that maps ADTs to the ADTs of sets of that type?
Let's take another look at your set type.
data NatSet = Full NatSet | Empty NatSet
There's always another NatSet inside this one; the two branches of the sum type indicate whether the current Nat is present in the set. This is a structural representation of sets. As you observed, the structure of the set depends on the structure of the values.
It's equivalent to a stream of Booleans: we test for membership by indexing into the stream.
data NatSet = NatSet Bool NatSet
empty = NatSet False empty
insert Z (NatSet _ bs) = NatSet True bs
insert (S n) (NatSet b bs) = NatSet b (insert n bs)
(NatSet b _) `contains` Z = b
(NatSet _ bs) `contains` (S n) = bs `contains` n
Based on the insight that set membership is like indexing into a collection of Booleans, let's pursue a generic implementation of sets of values of types formed as the fixed point of a polynomial functor (what's the problem?).
First things first. As you observed, the set's structure depends on the type of the things inside it. Here's the class of things which can be elements of a set,
class El a where
    data Set a
    empty    :: Set a
    full     :: Set a
    insert   :: a -> Set a -> Set a
    remove   :: a -> Set a -> Set a
    contains :: Set a -> a -> Bool
For a first example, I'll modify NatSet from above to suit this format.
instance El Nat where
    data Set Nat = NatSet Bool (Set Nat)
    empty = NatSet False empty
    full  = NatSet True full
    insert Z     (NatSet _ bs) = NatSet True bs
    insert (S n) (NatSet b bs) = NatSet b (insert n bs)
    remove Z     (NatSet _ bs) = NatSet False bs
    remove (S n) (NatSet b bs) = NatSet b (remove n bs)
    (NatSet b _)  `contains` Z     = b
    (NatSet _ bs) `contains` (S n) = bs `contains` n
Another simple El instance which we'll need later is (). A set of ()s is either full or empty, with nothing in between.
instance El () where
    newtype Set () = UnitSet Bool
    empty = UnitSet False
    full  = UnitSet True
    insert () _ = UnitSet True
    remove () _ = UnitSet False
    (UnitSet b) `contains` () = b
The fixed point of a functor f,
newtype Fix f = Fix (f (Fix f))
is an instance of El whenever f is an instance of the following El1 class.
class El1 f where
    data Set1 f a
    empty1    :: El a => Set1 f (Set a)
    full1     :: El a => Set1 f (Set a)
    insert1   :: El a => f a -> Set1 f (Set a) -> Set1 f (Set a)
    remove1   :: El a => f a -> Set1 f (Set a) -> Set1 f (Set a)
    contains1 :: El a => Set1 f (Set a) -> f a -> Bool

instance El1 f => El (Fix f) where
    newtype Set (Fix f) = FixSet (Set1 f (Set (Fix f)))
    empty = FixSet empty1
    full  = FixSet full1
    insert (Fix x) (FixSet s) = FixSet (insert1 x s)
    remove (Fix x) (FixSet s) = FixSet (remove1 x s)
    (FixSet s) `contains` (Fix x) = s `contains1` x
As usual we'll be composing bits of functors into bigger functors, before taking the resulting functor's fixed point to produce a concrete type.
newtype I a = I a
newtype K c a = K c
data (f :* g) a = f a :* g a
data (f :+ g) a = L (f a) | R (g a)
type N = K ()
For example,
type Nat = Fix (N :+ N)
type Tree = Fix (N :+ (I :* I))
Let's make sets of these things.
I (for identity) is the location in the polynomial where Fix will plug in the recursive substructure. We can just delegate its implementation of El1 to Fix's.
instance El1 I where
    newtype Set1 I a = ISet1 a
    empty1 = ISet1 empty
    full1  = ISet1 full
    insert1 (I x) (ISet1 s) = ISet1 (insert x s)
    remove1 (I x) (ISet1 s) = ISet1 (remove x s)
    (ISet1 s) `contains1` (I x) = s `contains` x
The constant functor K c features no recursive substructure, but it does feature a value of type c. If c can be put in a set, K c can be too.
instance El c => El1 (K c) where
    newtype Set1 (K c) a = KSet1 (Set c)
    empty1 = KSet1 empty
    full1  = KSet1 full
    insert1 (K x) (KSet1 s) = KSet1 (insert x s)
    remove1 (K x) (KSet1 s) = KSet1 (remove x s)
    (KSet1 s) `contains1` (K x) = s `contains` x
Note that this definition makes Set1 N isomorphic to Bool.
For sums, let's use our intuition that testing for membership is like indexing. When you index into a tuple, you choose between the left-hand and right-hand members of the tuple.
instance (El1 f, El1 g) => El1 (f :+ g) where
    data Set1 (f :+ g) a = SumSet1 (Set1 f a) (Set1 g a)
    empty1 = SumSet1 empty1 empty1
    full1  = SumSet1 full1  full1
    insert1 (L x) (SumSet1 l r) = SumSet1 (insert1 x l) r
    insert1 (R y) (SumSet1 l r) = SumSet1 l (insert1 y r)
    remove1 (L x) (SumSet1 l r) = SumSet1 (remove1 x l) r
    remove1 (R y) (SumSet1 l r) = SumSet1 l (remove1 y r)
    (SumSet1 l r) `contains1` (L x) = l `contains1` x
    (SumSet1 l r) `contains1` (R y) = r `contains1` y
This is now enough to do sets of natural numbers. With appropriate instances of Show, you can do this:
ghci> let z = Fix (L (K ()))
ghci> let s n = Fix (R (I n))
ghci> insert (s z) empty `contains` z
False
ghci> insert (s z) empty `contains` (s z)
True
ghci> empty :: Set (Fix (N :+ I))
FixSet (SumSet1 (KSet1 (UnitSet False)) (ISet1 (FixSet ( -- to infinity and beyond
I still haven't answered your original question, which was how should this work for product types? I can think of a few strategies, but none of them actually work.
We could make Set1 (f :* g) a a sum type. This has a pleasing symmetry: sets of sums are products, and sets of products are sums. In the context of the indexing idea, this is like saying "in order to get a Bool out of the set, you have to give an index to handle each of the two possible cases", rather like Either a b's elimination rule (a -> c) -> (b -> c) -> Either a b -> c. However, you get stuck when you try to come up with meaningful values for empty1 and full1:
instance (El1 f, El1 g) => El1 (f :* g) where
    data Set1 (f :* g) a = LSet1 (Set1 f a) | RSet1 (Set1 g a)
    insert1 (l :* r) (LSet1 x) = LSet1 (insert1 l x)
    insert1 (l :* r) (RSet1 y) = RSet1 (insert1 r y)
    remove1 (l :* r) (LSet1 x) = LSet1 (remove1 l x)
    remove1 (l :* r) (RSet1 y) = RSet1 (remove1 r y)
    (LSet1 x) `contains1` (l :* r) = x `contains1` l
    (RSet1 y) `contains1` (l :* r) = y `contains1` r
    empty1 = _
    full1  = _
You could try adding hacky Empty and Full constructors to Set1 (f :* g) but then you'd struggle to implement insert1 and remove1.
You could interpret a set of a product type as a pair of the sets of the two halves of the product. An item is in the set if both of its halves are in their respective sets. Like a sort of generalised intersection.
instance (El1 f, El1 g) => El1 (f :* g) where
    data Set1 (f :* g) a = ProdSet1 (Set1 f a) (Set1 g a)
    insert1 (l :* r) (ProdSet1 s t) = ProdSet1 (insert1 l s) (insert1 r t)
    remove1 (l :* r) (ProdSet1 s t) = ProdSet1 (remove1 l s) (remove1 r t)
    (ProdSet1 s t) `contains1` (l :* r) = (s `contains1` l) && (t `contains1` r)
    empty1 = ProdSet1 empty1 empty1
    full1  = ProdSet1 full1  full1
But this implementation doesn't work correctly. Observe:
ghci> let tip = Fix (L (K ()))
ghci> let fork l r = Fix (R (I l :* I r))
ghci> let s1 = insert (fork tip tip) empty
ghci> let s2 = remove (fork tip (fork tip tip)) s1
ghci> s2 `contains` (fork tip tip)
False
Removing fork tip (fork tip tip) also removed fork tip tip. tip got removed from the left-hand half of the set, which meant any tree whose left-hand branch is tip got removed with it. We removed more items than we intended. (However, if you don't need a remove operation on your sets, this implementation works - though that's just another disappointing asymmetry.) You could implement contains with || instead of && but then you'll end up inserting more items than you intended.
Finally, I also thought about treating a set of products as a set of sets.
data Set1 (f :* g) a = ProdSet1 (Set1 f (Set1 g a))
This doesn't work - try implementing any of El1's methods and you'll get stuck right away.
So at the end of the day your intuition was right. Product types are the problem. There's no meaningful structural representation of sets for product types.
Of course, this is all academic really. Fixed points of polynomial functors have a well-defined notion of equality and ordering, so you can do what everyone else does and use Data.Set.Set to represent sets, even for product types. It won't have such good asymptotics, though: equality and ordering tests on such values are O(n) (where n is the size of the value), making membership operations on such sets O(n log m) (where m is the size of the set) because the set itself is represented as a balanced tree. Contrast with our generic structural sets, where membership operations are O(n).
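To make that concrete, here is a minimal sketch (mine, not the answer's) of the Data.Set fallback for the question's Tree type:
import qualified Data.Set as S

-- The question's tree type, with derived instances so it can go in a Data.Set.
data Tree = Branch Tree Tree | Tip
    deriving (Eq, Ord, Show)

-- Membership works for product types too; each comparison walks the whole
-- value, so it is O(n), making member O(n log m) for a set of size m.
demo :: Bool
demo = S.member (Branch Tip Tip) s && not (S.member (Branch Tip (Branch Tip Tip)) s)
  where s = S.fromList [Tip, Branch Tip Tip]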

Indexing into containers: the mathematical underpinnings

When you want to pull an element out of a data structure, you have to give its index. But the meaning of index depends on the data structure itself.
class Indexed f where
    type Ix f
    (!) :: f a -> Ix f -> Maybe a  -- indices can be out of bounds
For example...
Elements in a list have numeric positions.
data Nat = Z | S Nat

instance Indexed [] where
    type Ix [] = Nat
    []     ! _     = Nothing
    (x:_)  ! Z     = Just x
    (_:xs) ! (S n) = xs ! n
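A quick sanity check of the list instance (my own example, not from the post):
> ['a','b','c'] ! S Z
Just 'b'
> ([] :: [Int]) ! Z
Nothing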
Elements in a binary tree are identified by a sequence of directions.
data Tree a = Leaf | Node (Tree a) a (Tree a)
data TreeIx = Stop | GoL TreeIx | GoR TreeIx  -- equivalently [Bool]

instance Indexed Tree where
    type Ix Tree = TreeIx
    Leaf       ! _     = Nothing
    Node l x r ! Stop  = Just x
    Node l x r ! GoL i = l ! i
    Node l x r ! GoR j = r ! j
Looking for something in a rose tree entails stepping down the levels one at a time by selecting a tree from the forest at each level.
data Rose a = Rose a [Rose a]  -- I don't even like rosé
data RoseIx = Top | Down Nat RoseIx  -- equivalently [Nat]

instance Indexed Rose where
    type Ix Rose = RoseIx
    Rose x ts ! Top      = Just x
    Rose x ts ! Down i j = ts ! i >>= (! j)
It seems that the index of a product type is a sum (telling you which arm of the product to look at), the index of an element is the unit type, and the index of a nested type is a product (telling you where to look in the nested type). Sums seem to be the only ones which aren't somehow linked to the derivative. The index of a sum is also a sum - it tells you which part of the sum the user is hoping to find, and if that expectation is violated you're left with a handful of Nothing.
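That observation can be written down directly; here is a speculative type-level sketch (my own encoding and names, not from the post) recording just the shape of the index for each building block:
{-# LANGUAGE TypeFamilies, TypeOperators #-}

import Data.Kind (Type)
import Data.Void (Void)

-- Hypothetical polynomial building blocks.
newtype I a         = I a                     -- the element right here
newtype K c a       = K c                     -- a constant; holds no element
data    (f :*: g) a = f a :*: g a             -- product
data    (f :+: g) a = InL (f a) | InR (g a)   -- sum
newtype (f :.: g) a = Comp (f (g a))          -- composition (nesting)

-- The shape of the index type for each building block.
type family Index (f :: Type -> Type) :: Type
type instance Index I         = ()                          -- "stop here"
type instance Index (K c)     = Void                        -- nothing to point at
type instance Index (f :*: g) = Either (Index f) (Index g)  -- pick an arm
type instance Index (f :+: g) = Either (Index f) (Index g)  -- guess the constructor
type instance Index (f :.: g) = (Index f, Index g)          -- index outer, then inner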
In fact I had some success implementing ! generically for functors defined as the fixed point of a polynomial bifunctor. I won't go into detail, but Fix f can be made an instance of Indexed when f is an instance of Indexed2...
class Indexed2 f where
    type IxA f
    type IxB f
    ixA :: f a b -> IxA f -> Maybe a
    ixB :: f a b -> IxB f -> Maybe b
... and it turns out you can define an instance of Indexed2 for each of the bifunctor building blocks.
But what's really going on? What is the underlying relationship between a functor and its index? How does it relate to the functor's derivative? Does one need to understand the theory of containers (which I don't, really) to answer this question?
It seems like the index into the type is an index into the set of constructors, followed by an index into the product representing that constructor. This can be implemented quite naturally with e.g. generics-sop.
First you need a datatype to represent possible indices into a single element of the product. This could be an index pointing to an element of type a,
or an index pointing to something of type g b - which requires an index pointing into g and an index pointing to an element of type a in b. This is encoded with the following type:
import Generics.SOP

data ArgIx f x x' where
    Here  :: ArgIx f x x
    There :: (Generic (g x')) => Ix g -> ArgIx f x x' -> ArgIx f x (g x')
newtype Ix f = ...
The index itself is just a sum (implemented by NS for n-ary sum) of sums over the generic representation of the type (choice of constructor, choice of constructor element):
newtype Ix f = MkIx (forall x . NS (NS (ArgIx f x)) (Code (f x)))
You can write smart constructors for various indices:
listIx :: Natural -> Ix []
listIx 0 = MkIx $ S $ Z $ Z Here
listIx k = MkIx $ S $ Z $ S $ Z $ There (listIx (k-1)) Here
treeIx :: [Bool] -> Ix Tree
treeIx [] = MkIx $ S $ Z $ S $ Z Here
treeIx (b:bs) = case b of
    True  -> MkIx $ S $ Z $ Z $ There (treeIx bs) Here
    False -> MkIx $ S $ Z $ S $ S $ Z $ There (treeIx bs) Here
roseIx :: [Natural] -> Ix Rose
roseIx [] = MkIx $ Z $ Z Here
roseIx (k:ks) = MkIx $ Z $ S $ Z $ There (listIx k) (There (roseIx ks) Here)
Note that e.g. in the list case, you cannot construct a (non-bottom) index pointing to the [] constructor - likewise for Tree and Leaf, or for constructors containing values whose type is not a or something containing some values of type a. The quantification in MkIx prevents the construction of bad things like an index pointing to the first Int in data X x = X Int x where x is instantiated to Int.
The implementation of the index function is fairly straightforward, even if the types are scary:
(!) :: (Generic (f x)) => f x -> Ix f -> Maybe x
(!) arg (MkIx ix) = go (unSOP $ from arg) ix where
    atIx :: a -> ArgIx f x a -> Maybe x
    atIx a Here = Just a
    atIx a (There ix0 ix1) = a ! ix0 >>= flip atIx ix1

    go :: (All SListI xss) => NS (NP I) xss -> NS (NS (ArgIx f x)) xss -> Maybe x
    go (Z a) (Z b)  = hcollapse $ hzipWith (\(I x) -> K . atIx x) a b
    go (S x) (S x') = go x x'
    go Z{} S{} = Nothing
    go S{} Z{} = Nothing
The go function compares the constructor pointed to by the index and the actual constructor used by the type. If the constructors don't match, the indexing returns Nothing. If they do, the actual indexing is done - which is trivial in the case that the index points exactly Here, and in the case of some substructure, both indexing operations must succeed one after the other, which is handled by >>=.
And a simple test:
>map (("hello" !) . listIx) [0..5]
[Just 'h',Just 'e',Just 'l',Just 'l',Just 'o',Nothing]

Memoizing a function of type [Integer] -> a

My problem: how do I efficiently memoize an expensive function f :: [Integer] -> a that is defined for all finite lists of integers and has the property f . sort = f?
My typical use case is that given a list as of integers I need to obtain the values f (a:as) for various Integer a, so I'd like to build up simultaneously a directed labelled graph whose vertices are pairs of an Integer list and its function value. An edge labelled by a from (as, f as) to (bs, f bs) exists if and only if a:as = bs.
Stealing from a brilliant answer by Edward Kmett I simply copied
{-# LANGUAGE BangPatterns #-}

data Tree a = Tree (Tree a) a (Tree a)

instance Functor Tree where
    fmap f (Tree l m r) = Tree (fmap f l) (f m) (fmap f r)

index :: Tree a -> Integer -> a
index (Tree _ m _) 0 = m
index (Tree l _ r) n = case (n - 1) `divMod` 2 of
    (q,0) -> index l q
    (q,1) -> index r q

nats :: Tree Integer
nats = go 0 1
  where go !n !s = Tree (go l s') n (go r s')
          where l  = n + s
                r  = l + s
                s' = s * 2
and adapted his idea to my problem as
-- directed graph labelled by Integers
data Graph a = Graph a (Tree (Graph a))

instance Functor Graph where
    fmap f (Graph a t) = Graph (f a) (fmap (fmap f) t)

-- walk the graph following the given labels
walk :: Graph a -> [Integer] -> a
walk (Graph a _) []     = a
walk (Graph _ t) (x:xs) = walk (index t x) xs

-- graph of all finite integer sequences
intSeq :: Graph [Integer]
intSeq = Graph [] (fmap (\n -> fmap (n:) intSeq) nats)

-- could be replaced by Data.Strict.Pair
data StrictPair a b = StrictPair !a !b
    deriving Show

-- f = sum modified according to Edward's idea (the real function is more complicated)
g :: ([Integer] -> StrictPair Integer [Integer]) -> [Integer] -> StrictPair Integer [Integer]
g mf []     = StrictPair 0 []
g mf (a:as) = StrictPair (a+x) (a:as)
  where StrictPair x y = mf as

g_graph :: Graph (StrictPair Integer [Integer])
g_graph = fmap (g g_m) intSeq

g_m :: [Integer] -> StrictPair Integer [Integer]
g_m = walk g_graph
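A quick usage check (my own trace of g above, so take the numbers as a sanity check): the memoized function returns the running sum paired with the argument list, e.g.
> g_m [1,2,3]
StrictPair 6 [1,2,3]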
This works OK, but as the function f is independent of the order of the occurring integers (but not of their counts) there should be only one vertex in the graph for all integer lists equal up to ordering.
How do I achieve this?
How about just defining g_m' = g_m . sort, i.e. you simply sort the input list before calling your memoized function?
I have a feeling this is the best you can do, since if you want your memoized graph to consist of only sorted paths, someone has to look at all of the elements of the list before constructing the path.
Depending on what your input lists look like it might be helpful to transform them in a way which makes the trees branch less. For instance, you might try sorting and taking differences:
original input list: [8,3,14,8,5]
sorted: [3,3,8,8,14]
diffed: [3,0,5,0,6] -- use this as the key
The transformation is a bijection, and the trees branch less because there are smaller numbers involved.
You can use a slightly different approach.
There is a trick in the proof that a finite product of countable sets is countable:
We can map the sequence [a1, ..., an] to Nat by product . zipWith (^) primes: 2 ^ a1 * 3 ^ a2 * 5 ^ a3 * ... * primen ^ an.
To avoid problems with sequences ending in zero, we can increase the last element by one.
As the sequence is ordered, we can exploit the property as user5402 mentioned.
The benefit of using the tree is that you can increase branching to speed up traversal. On the other hand, the prime trick could make indices quite big, but hopefully some tree paths will just stay unexplored (remain as thunks).
{-# LANGUAGE BangPatterns #-}
-- imports needed by the definitions further below
import Data.List (sort, tails)
import Test.QuickCheck

-- Modified from Kmett's answer:
data Tree a = Tree a (Tree a) (Tree a) (Tree a) (Tree a)

instance Functor Tree where
    fmap f (Tree x a b c d) = Tree (f x) (fmap f a) (fmap f b) (fmap f c) (fmap f d)

index :: Tree a -> Integer -> a
index (Tree x _ _ _ _) 0 = x
index (Tree _ a b c d) n = case (n - 1) `divMod` 4 of
    (q,0) -> index a q
    (q,1) -> index b q
    (q,2) -> index c q
    (q,3) -> index d q

nats :: Tree Integer
nats = go 0 1
  where
    go !n !s = Tree n (go a s') (go b s') (go c s') (go d s')
      where
        a  = n + s
        b  = a + s
        c  = b + s
        d  = c + s
        s' = s * 4
toList :: Tree a -> [a]
toList as = map (index as) [0..]
-- Primes -- https://www.haskell.org/haskellwiki/Prime_numbers
-- Generation and factorisation could be done much better
minus (x:xs) (y:ys) = case (compare x y) of
    LT -> x : minus xs (y:ys)
    EQ -> minus xs ys
    GT -> minus (x:xs) ys
minus xs _ = xs

primes = 2 : sieve [3..] primes
  where
    sieve xs (p:ps) | q <- p*p , (h,t) <- span (< q) xs =
        h ++ sieve (t `minus` [q, q+p..]) ps
addToLast :: [Integer] -> [Integer]
addToLast [] = []
addToLast [x] = [x + 1]
addToLast (x:xs) = x : addToLast xs
subFromLast :: [Integer] -> [Integer]
subFromLast [] = []
subFromLast [x] = [x - 1]
subFromLast (x:xs) = x : subFromLast xs
addSubProp :: [NonNegative Integer] -> Property
addSubProp xs = xs' === subFromLast (addToLast xs')
  where xs' = map getNonNegative xs

-- Trick from user5402's answer
toDiffList :: [Integer] -> [Integer]
toDiffList = toDiffList' 0
  where toDiffList' _ []     = []
        toDiffList' p (x:xs) = x - p : toDiffList' x xs

fromDiffList :: [Integer] -> [Integer]
fromDiffList = fromDiffList' 0
  where fromDiffList' _ []     = []
        fromDiffList' p (x:xs) = p + x : fromDiffList' (x + p) xs

diffProp :: [Integer] -> Property
diffProp xs = xs === fromDiffList (toDiffList xs)
listToInteger :: [Integer] -> Integer
listToInteger = product . zipWith (^) primes . addToLast

integerToList :: Integer -> [Integer]
integerToList = subFromLast . impl primes 0
  where impl _      _ 0 = []
        impl _      0 1 = []
        impl _      k 1 = [k]
        impl (p:ps) k n = case n `divMod` p of
            (n', 0) -> impl (p:ps) (k + 1) n'
            (_,  _) -> k : impl ps 0 n

listProp :: [NonNegative Integer] -> Property
listProp xs = xs' === integerToList (listToInteger xs')
  where xs' = map getNonNegative xs
toIndex :: [Integer] -> Integer
toIndex = listToInteger . toDiffList
fromIndex :: Integer -> [Integer]
fromIndex = fromDiffList . integerToList
-- [1,0] /= [0]
-- Decreasing sequence!
doesntHold :: [NonNegative Integer] -> Property
doesntHold xs = xs' === fromIndex (toIndex xs')
  where xs' = map getNonNegative xs

holds :: [NonNegative Integer] -> Property
holds xs = xs' === fromIndex (toIndex xs')
  where xs' = sort $ map getNonNegative xs

g :: ([Integer] -> Integer) -> [Integer] -> Integer
g mg = g' . sort
  where g' []     = 0
        g' (x:xs) = x + sum (map mg $ tails xs)
g_tree :: Tree Integer
g_tree = fmap (g faster_g' . fromIndex) nats
faster_g' :: [Integer] -> Integer
faster_g' = index g_tree . toIndex
faster_g = faster_g' . sort
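A tiny worked example of the prime encoding (my own trace of the functions above, so treat it as a sanity check rather than gospel):
> toDiffList [0,2]
[0,2]
> addToLast [0,2]
[0,3]
> toIndex [0,2]      -- 2^0 * 3^3
27
> fromIndex 27
[0,2]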
On my machine fix g [1..22] feels slow, while faster_g [1..40] is still blazingly fast.
Addition: if we have a bounded set (with indices 0..n-1), we can encode it as a0 * n^0 + a1 * n^1 + ....
We can encode any Integer as a binary list, e.g. 11 is [1, 1, 0, 1] (least significant bit first).
Then if we separate the integers in the list with 2, we get a sequence of bounded values.
As a bonus we can take the sequence of 0, 1, 2 digits and compress it to binary using e.g. Huffman encoding, as 2 is much rarer than 0 or 1. But this might be overkill.
With this trick, indices stay much smaller and the encoding is probably more compact.
{-# LANGUAGE BangPatterns #-}
-- From Kmett's answer:
import Data.Function (fix)
import Data.List (sort, tails)
import Data.List.Split (splitOn)
import Test.QuickCheck
{-- Tree definition as before --}
-- 0, 1, 2
newtype N3 = N3 { unN3 :: Integer }
    deriving (Eq, Show)

instance Arbitrary N3 where
    arbitrary = elements $ map N3 [ 0, 1, 2 ]

-- Integer <-> N3
coeffs3 :: [Integer]
coeffs3 = coeffs' 1
  where coeffs' n = n : coeffs' (n * 3)

listToInteger :: [N3] -> Integer
listToInteger = sum . zipWith f coeffs3
  where f n (N3 m) = n * m

listFromInteger :: Integer -> [N3]
listFromInteger 0 = []
listFromInteger n = case n `divMod` 3 of
    (q, m) -> N3 m : listFromInteger q

listProp :: [N3] -> Property
listProp xs = (null xs || last xs /= N3 0) ==> xs === listFromInteger (listToInteger xs)
-- Integer <-> N2
-- 0, 1
newtype N2 = N2 { unN2 :: Integer }
    deriving (Eq, Show)

coeffs2 :: [Integer]
coeffs2 = coeffs' 1
  where coeffs' n = n : coeffs' (n * 2)

integerToBin :: Integer -> [N2]
integerToBin 0 = []
integerToBin n = case n `divMod` 2 of
    (q, m) -> N2 m : integerToBin q

integerFromBin :: [N2] -> Integer
integerFromBin = sum . zipWith f coeffs2
  where f n (N2 m) = n * m

binProp :: NonNegative Integer -> Property
binProp (NonNegative n) = n === integerFromBin (integerToBin n)

-- unsafe!
n3ton2 :: N3 -> N2
n3ton2 = N2 . unN3

n2ton3 :: N2 -> N3
n2ton3 = N3 . unN2
-- [Integer] <-> [N3]
integerListToN3List :: [Integer] -> [N3]
integerListToN3List = concatMap (++ [N3 2]) . map (map n2ton3 . integerToBin)

integerListFromN3List :: [N3] -> [Integer]
integerListFromN3List = init . map (integerFromBin . map n3ton2) . splitOn [N3 2]

n3ListProp :: [NonNegative Integer] -> Property
n3ListProp xs = xs' === integerListFromN3List (integerListToN3List xs')
  where xs' = map getNonNegative xs
-- Trick from user5402 answer
-- Integer <-> Sorted Integer
toDiffList :: [Integer] -> [Integer]
toDiffList = toDiffList' 0
  where toDiffList' _ []     = []
        toDiffList' p (x:xs) = x - p : toDiffList' x xs

fromDiffList :: [Integer] -> [Integer]
fromDiffList = fromDiffList' 0
  where fromDiffList' _ []     = []
        fromDiffList' p (x:xs) = p + x : fromDiffList' (x + p) xs

diffProp :: [Integer] -> Property
diffProp xs = xs === fromDiffList (toDiffList xs)
---
toIndex :: [Integer] -> Integer
toIndex = listToInteger . integerListToN3List . toDiffList
fromIndex :: Integer -> [Integer]
fromIndex = fromDiffList . integerListFromN3List . listFromInteger
-- [1,0] /= [0]
-- Decreasing sequence! doesn't terminate in this case
doesntHold :: [NonNegative Integer] -> Property
doesntHold xs = xs' === fromIndex (toIndex xs')
  where xs' = map getNonNegative xs

holds :: [NonNegative Integer] -> Property
holds xs = xs' === fromIndex (toIndex xs')
  where xs' = sort $ map getNonNegative xs

g :: ([Integer] -> Integer) -> [Integer] -> Integer
g mg = g' . sort
  where g' []     = 0
        g' (x:xs) = x + sum (map mg $ tails xs)

g_tree :: Tree Integer
g_tree = fmap (g faster_g' . fromIndex) nats

faster_g' :: [Integer] -> Integer
faster_g' = index g_tree . toIndex

faster_g = faster_g' . sort
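And a similar trace for the base-3 packing (again my own computation): 3 is the bit list [1,1] and 1 is [1]; separating the bit lists with 2s gives [1,1,2,1,2], which read off in base 3 (least significant digit first) is 211.
> toDiffList [3,4]
[3,1]
> toIndex [3,4]
211
> fromIndex 211
[3,4]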
Second addition:
I quickly benchmarked the graph and the binary sequence approaches for my g with:
import System.Environment (getArgs)

main :: IO ()
main = do
    n <- read . head <$> getArgs
    print $ faster_g [100, 110..n]
And the results are:
% time ./IntegerMemo 1000
1225560638892526472150132981770
./IntegerMemo 1000 0.19s user 0.01s system 98% cpu 0.200 total
% time ./IntegerMemo 2000
3122858113354873680008305238045814042010921833620857170165770
./IntegerMemo 2000 1.83s user 0.05s system 99% cpu 1.888 total
% time ./IntegerMemo 2500
4399449191298176980662410776849867104410434903220291205722799441218623242250
./IntegerMemo 2500 3.74s user 0.09s system 99% cpu 3.852 total
% time ./IntegerMemo 3000
5947985907461048240178371687835977247601455563536278700587949163642187584269899171375349770
./IntegerMemo 3000 6.66s user 0.13s system 99% cpu 6.830 total
% time ./IntegerMemoGrap 1000
1225560638892526472150132981770
./IntegerMemoGrap 1000 0.10s user 0.01s system 97% cpu 0.113 total
% time ./IntegerMemoGrap 2000
3122858113354873680008305238045814042010921833620857170165770
./IntegerMemoGrap 2000 0.97s user 0.04s system 98% cpu 1.028 total
% time ./IntegerMemoGrap 2500
4399449191298176980662410776849867104410434903220291205722799441218623242250
./IntegerMemoGrap 2500 2.11s user 0.08s system 99% cpu 2.202 total
% time ./IntegerMemoGrap 3000
5947985907461048240178371687835977247601455563536278700587949163642187584269899171375349770
./IntegerMemoGrap 3000 3.33s user 0.09s system 99% cpu 3.452 total
Looks like the graph version is faster by a constant factor of 2, but they seem to have the same time complexity :)
Looks like my problem is solved by simply replacing intSeq in the definition of g_graph by a monotone version:
-- replace vertices for non-monotone integer lists by the corresponding monotone one
monoIntSeq :: Graph [Integer]
monoIntSeq = f intSeq
  where f (Graph as t) | as == sort as = Graph as $ fmap f t
                       | otherwise     = fetch monoIntSeq $ sort as
-- extract the subgraph after following the given labels
fetch :: Graph a -> [Integer] -> Graph a
fetch g [] = g
fetch (Graph _ t) (x:xs) = fetch (index t x) xs
g_graph :: Graph (StrictPair Integer [Integer])
g_graph = fmap (g g_m) monoIntSeq
Many thanks to all (especially user5402 and Oleg) for the help!
Edit: I still have the problem that the memory consumption is too high for my typical use case, which can be described by following a path like this:
p :: [Integer]
p = map f [1..]
  where f n | n `mod` 6 == 0 = n `div` 6
            | n `mod` 3 == 0 = n `div` 3
            | n `mod` 2 == 0 = n `div` 2
            | otherwise      = n
A slight improvement is to define the monotone integer sequences directly like this:
-- needs: import Data.List (foldl', insertBy) and import Data.Ord (comparing, Down)

-- extract the subgraph after following the given labels (right to left)
fetch :: Graph a -> [Integer] -> Graph a
fetch = foldl' step
  where step (Graph _ t) n = index t n

-- walk the graph following the given labels (right to left)
walk :: Graph a -> [Integer] -> a
walk g ns = a
  where Graph a _ = fetch g ns

-- all monotone falling integer sequences
monoIntSeqs :: Graph [Integer]
monoIntSeqs = Graph [] $ fmap (flip f monoIntSeqs) nats
  where f n (Graph ns t) | null ns      = Graph (n:ns) $ fmap (f n) t
                         | n >= head ns = Graph (n:ns) $ fmap (f n) t
                         | otherwise    = fetch monoIntSeqs (insert' n ns)
        insert' = insertBy (comparing Down)
But in the end I might just use the original integer sequences without identification, identify nodes explicitly now and then, and avoid keeping a reference to g_graph etc. so that garbage collection can clean up as the program proceeds.
Reading the functional pearl Trouble Shared is Trouble Halved by Richard Bird and Ralf Hinze, I understood how to implement what I was looking for two years ago (again based on Edward Kmett's trick):
{-# LANGUAGE BangPatterns #-}

import Data.Function (fix)

data Tree a = Tree (Tree a) a (Tree a)
    deriving Show

instance Functor Tree where
    fmap f (Tree l m r) = Tree (fmap f l) (f m) (fmap f r)

index :: Tree a -> Integer -> a
index (Tree _ m _) 0 = m
index (Tree l _ r) n = case (n - 1) `divMod` 2 of
    (q,0) -> index l q
    (q,1) -> index r q

nats :: Tree Integer
nats = go 0 1
  where go !n !s = Tree (go l s') n (go r s')
          where l  = n + s
                r  = l + s
                s' = s * 2

data IntSeqTree a = IntSeqTree a (Tree (IntSeqTree a))

val :: IntSeqTree a -> a
val (IntSeqTree a _) = a

step :: Integer -> IntSeqTree t -> IntSeqTree t
step n (IntSeqTree _ ts) = index ts n

intSeqTree :: IntSeqTree [Integer]
intSeqTree = fix $ create []
  where create p x = IntSeqTree p $ fmap (extend x) nats
        extend x n = case span (> n) (val x) of
            ([], p) -> fix $ create (n:p)
            (m,  p) -> foldr step intSeqTree (m ++ n:p)

instance Functor IntSeqTree where
    fmap f (IntSeqTree a t) = IntSeqTree (f a) (fmap (fmap f) t)
In my use case I have hundreds or thousands of similar integer sequences (of length few hundred entries) that are generated incrementally. So for me this way is cheaper than sorting the sequences before looking up the function value (which I will access by using fmap on intSeqTree).
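A speculative usage sketch (my own reading of the code above, not part of the original answer): memoise an order-independent function by fmapping it over intSeqTree once and then walking the labels with step.
memoise :: ([Integer] -> a) -> [Integer] -> a
memoise f = \xs -> val (foldr step table xs)
  where
    -- built once and shared as long as the partial application 'memoise f' is shared;
    -- f is applied to the stored, sorted representative of each label multiset
    table = fmap f intSeqTree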

modify edge label in Haskell package fgl

I've written the following code to increment the label of a given edge of a graph with the FGL package; if the edge does not exist, it is created before being incremented:
import Data.Graph.Inductive
incrementEdge :: Edge -> Gr a Int -> Gr a Int
incrementEdge edge g = gmap (increment edge) g
increment :: Edge -> Context a Int -> Context a Int
increment (a,b) all@(p,n,x,v) = if a /= n then all else (p,n,x,v'')
  where
    v'  = let (r,_) = elemNode b v in if r then v else ((0,b):v)
    v'' = map (\(x,y) -> if y == b then (x+1,y) else (x,y)) v'
a :: Gr String Int
a = ([],1,"a",[]) & empty
b = ([],2,"b",[]) & a
While testing I got the following result:
*Main> incrementEdge (1,1) b
1:"a"->[(1,1)]
2:"b"->[]
*Main> incrementEdge (1,2) b
1:"a"->[(1,2)]
2:"b"->[]
*Main> incrementEdge (2,2) b
1:"a"->[]
2:"b"->[(1,2)]
But ...
*Main> incrementEdge (2,1) b
*** Exception: Edge Exception, Node: 1
What is the problem here?
EDIT
elemNode ys [] = (False,0)
elemNode ys ((m,xs):xss) = if ys == xs then (True,m) else elemNode ys xss
I want to write a function which will add an edge to a graph from two node labels. The function checks that the two nodes exist; if not, it creates them:
- if the nodes already exist, the label of the edge between them is incremented,
- if there is no edge between those nodes, it is created before being incremented.
Thanks for your reply
I don't think you're supposed to add edges with gmap: it folds over all the contexts in the graph in an arbitrary order and builds up the new graph by &ing the new contexts together. If a new context has a link to or from a node that hasn't been &ed yet, you get the Edge Exception.
Here's a simple example:
*Main> ([], 1, "a", [(0, 2)]) & empty :: Gr String Int
*** Exception: Edge Exception, Node: 2
I've only used FGL for a couple of little projects and am certainly no expert, but it probably makes more sense just to add new edges (with label 1) using insEdge and then do all the counting when needed:
import Data.Graph.Inductive
import qualified Data.IntMap as I
incrementEdge :: Edge -> Gr a Int -> Gr a Int
incrementEdge (a, b) = insEdge (a, b, 1)
count :: Gr a Int -> Gr a Int
count = gmap $ \(p, n, x, v) -> (countAdj p, n, x, countAdj v)
  where
    swap (a, b) = (b, a)
    countAdj = map swap . I.toList . I.fromListWith (+) . map swap
This seems to work as desired:
*Main> count $ incrementEdge (2, 1) b
1:"a"->[]
2:"b"->[(1,1)]
*Main> count $ incrementEdge (2, 1) $ incrementEdge (2, 1) b
1:"a"->[]
2:"b"->[(2,1)]
1) A quick grep for Edge Exception in the fgl package:
cabal unpack fgl
cd fgl*
grep "Edge Exception" * -R
yields the file Data/Graph/Inductive/Tree.hs. Looking there we have the call updAdj that will throw this exception any time elemFM g v is false.
2) Could you provide runnable code? What you posted is missing elemNode (when using fgl 5.4.2.3)
3) Could you provide what version of fgl you're using? If it's old an upgrade might fix the issue.
Mapping over the graph doesn't seem like quite the right kind of traversal. The following works with the extracted context of the edge's source node.
edgeLookup :: Node -> [(a,Node)] -> Maybe ((a,Node), [(a,Node)])
edgeLookup n = aux . break ((== n) . snd)
  where aux (h, [])   = Nothing
        aux (h, t:ts) = Just (t, h ++ ts)

incrementEdge :: Edge -> Gr a Int -> Maybe (Gr a Int)
incrementEdge (from,to) g = aux $ match from g
  where aux (Nothing, _)         = Nothing
        aux (Just (i,n,l,o), g') = Just $ (i,n,l,checkEdge o) & g'
        checkEdge outEdges =
            maybe ((1,to):outEdges) incEdge $ edgeLookup to outEdges
        incEdge ((cnt,n), rst) = (cnt+1,n):rst
I would probably also use a helper to go from (Maybe a, b) -> Maybe (a,b) then fmap aux over the helper composed with match. That would help to distill things down a bit better.
EDIT
To support node addition based on labels, one needs to track the bijection between labels and Node identifiers (Ints). This can be done by using a Map that is updated in parallel to the graph.
import Data.Graph.Inductive
import Data.Map (Map)
import qualified Data.Map as M
import Data.Maybe (fromJust)
-- A graph with uniquely labeled nodes.
type LGraph a b = (Map a Int, Gr a b)
-- Ensure that a node with the given label is present in the given
-- 'LGraph'. Return the Node identifier for the node, and a graph that
-- includes the node.
addNode :: Ord a => a -> LGraph a b -> (Int, LGraph a b)
addNode label (m,g) = aux $ M.lookup label m
  where aux (Just nid) = (nid, (m,g))
        aux Nothing    = (nid', (m', g'))
        [nid'] = newNodes 1 g
        m'     = M.insert label nid' m
        g'     = insNode (nid', label) g

-- Adding a context to a graph requires updating the label map.
(&^) :: Ord a => Context a b -> LGraph a b -> LGraph a b
c@(_, n, label, _) &^ (m,g) = (m', g')
  where m' = M.insert label n m
        g' = c & g
-- Look for a particular 'Node' in an edge list.
edgeLookup :: Node -> [(a,Node)] -> Maybe ((a,Node), [(a,Node)])
edgeLookup n = aux . break ((== n) . snd)
  where aux (h, [])   = Nothing
        aux (h, t:ts) = Just (t, h ++ ts)

-- Increment the edge between two nodes; create a new edge if needed.
incrementEdge :: Edge -> Gr a Int -> Maybe (Gr a Int)
incrementEdge (from,to) g = fmap aux $ liftMaybe (match from g)
  where aux ((i,n,l,o), g') = (i,n,l,checkEdge o) & g'
        checkEdge outEdges =
            maybe ((1,to):outEdges) incEdge $ edgeLookup to outEdges
        incEdge ((cnt,n), rst) = (cnt+1,n):rst
liftMaybe :: (Maybe a, b) -> Maybe (a, b)
liftMaybe (Nothing, _) = Nothing
liftMaybe (Just x, y) = Just (x, y)
-- Increment an edge in an 'LGraph'. If the nodes are not part of the
-- graph, add them.
incrementLEdge :: Ord a => (a, a) -> LGraph a Int -> LGraph a Int
incrementLEdge (from,to) g = (m', fromJust $ incrementEdge (from',to') g')
  where (from', gTmp)    = addNode from g
        (to', (m', g'))  = addNode to gTmp
-- Example
a' :: LGraph String Int
a' = ([],1,"a",[]) &^ (M.empty, empty)
b' = ([],2,"b",[]) &^ a'
test6 = incrementLEdge ("c","b") $ incrementLEdge ("b","a") b'
{-
*Main> test6
(fromList [("a",1),("b",2),("c",3)],
1:"a"->[]
2:"b"->[(1,1)]
3:"c"->[(1,2)])
-}
