Explicit Purely-Functional Data-Structure For Difference Lists

In Haskell, difference lists, in the sense of a representation of a list with an efficient concatenation operation, seem to be implemented in terms of function composition.

Functions and (dynamic) function compositions, though, must also be represented somehow in the computer's memory using data structures. This raises the question of how difference lists could be implemented in Haskell without using function composition, but rather through some basic purely-functional, node-based data structure. How could this be done with the same performance guarantees as those of function composition?

(++)'s notoriously bad asymptotics come about when you use it in a left-associative manner - that is, when (++)'s left argument is the result of another call to (++). Right-associative expressions run efficiently, though.
To put it more concretely: evaluating a chain of m left-nested appends like
((ws ++ xs) ++ ys) ++ zs -- m = 3 in this example
to WHNF requires you to force m thunks, because (++) is strict in its left argument.
case (
case (
case ws of { [] -> xs ; (w:ws) -> w:(ws ++ xs) }
) of { [] -> ys ; (x:xs) -> x:(xs ++ ys) }
) of { [] -> zs ; (y:ys) -> y:(ys ++ zs) }
In general, fully evaluating n elements of such a list requires forcing that stack of m thunks n times, for O(m*n) complexity. When the entire list is built from appends of singleton lists (i.e. (([w] ++ [x]) ++ [y]) ++ [z]), m = n, so the cost is O(n²).
Evaluating a right-nested append like
ws ++ (xs ++ (ys ++ zs))
to WHNF is much easier (O(1)):
case ws of
[] -> xs ++ (ys ++ zs)
(w:ws) -> w:(ws ++ (xs ++ (ys ++ zs)))
Evaluating n elements just requires evaluating n thunks, which is about as good as you can expect to do.
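You can observe the asymmetry directly. Here is a small sketch (the names are ours) that builds an n-element list out of n singleton appends in both association orders; forcing the left-nested one exhibits the quadratic behaviour:
-- Left-nested: foldl associates every (++) to the left, giving O(n^2).
-- Right-nested: foldr associates every (++) to the right, giving O(n).
leftNested, rightNested :: Int -> [Int]
leftNested  n = foldl (++) [] [[i] | i <- [1..n]]
rightNested n = foldr (++) [] [[i] | i <- [1..n]]
-- In GHCi, compare: length (leftNested 10000) vs length (rightNested 10000)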
Difference lists work by paying a small (O(m)) up-front cost to automatically re-associate calls to (++).
newtype DList a = DList ([a] -> [a])
fromList xs = DList (xs ++)
toList (DList f) = f []
instance Monoid (DList a) where
mempty = fromList []
DList f `mappend` DList g = DList (f . g)
Now a left-nested expression,
toList (((fromList ws <> fromList xs) <> fromList ys) <> fromList zs)
gets evaluated in a right-associative manner:
((((ws ++) . (xs ++)) . (ys ++)) . (zs ++)) []
-- expand innermost (.)
(((\l0 -> ws ++ (xs ++ l0)) . (ys ++)) . (zs ++)) []
-- expand innermost (.)
((\l1 -> (\l0 -> ws ++ (xs ++ l0)) (ys ++ l1)) . (zs ++)) []
-- beta reduce
((\l1 -> ws ++ (xs ++ (ys ++ l1))) . (zs ++)) []
-- expand innermost (.)
(\l2 -> (\l1 -> ws ++ (xs ++ (ys ++ l1))) (zs ++ l2)) []
-- beta reduce
(\l2 -> ws ++ (xs ++ (ys ++ (zs ++ l2)))) []
-- beta reduce
ws ++ (xs ++ (ys ++ (zs ++ [])))
You perform O(m) steps to evaluate the composed function, then O(n) steps to evaluate the resulting expression, for a total complexity of O(m+n), which is asymptotically better than O(m*n). When the list is made up entirely of appends of singleton lists, m = n and you get O(2n) ~ O(n), which is asymptotically better than O(n²).
This trick works for any Monoid.
newtype RMonoid m = RMonoid (m -> m) -- "right-associative monoid"
toRM m = RMonoid (m <>)
fromRM (RMonoid f) = f mempty
instance Monoid m => Monoid (RMonoid m) where
mempty = toRM mempty
RMonoid f `mappend` RMonoid g = RMonoid (f . g)
See also, for example, the Codensity monad, which applies this idea to monadic expressions built using (>>=) (rather than monoidal expressions built using (<>)).
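For the curious, here is a minimal sketch of that idea (mirroring Control.Monad.Codensity from the kan-extensions package): a wrapper that re-associates (>>=) the same way DList re-associates (++).
{-# LANGUAGE RankNTypes #-}
newtype Codensity m a = Codensity { runCodensity :: forall b. (a -> m b) -> m b }

instance Functor (Codensity m) where
  fmap f (Codensity k) = Codensity (\c -> k (c . f))

instance Applicative (Codensity m) where
  pure a = Codensity (\c -> c a)
  Codensity kf <*> Codensity ka = Codensity (\c -> kf (\f -> ka (c . f)))

instance Monad (Codensity m) where
  Codensity k >>= f = Codensity (\c -> k (\a -> runCodensity (f a) c))

-- Pay a small up-front cost to re-associate, then run:
toCodensity :: Monad m => m a -> Codensity m a
toCodensity m = Codensity (m >>=)

fromCodensity :: Monad m => Codensity m a -> m a
fromCodensity (Codensity k) = k return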
I hope I have convinced you that (++) only causes a problem when used in a left-associative fashion. Now, you can easily write a list-like data structure for which append is strict in its right argument, and left-associative appends are therefore not a problem.
data Snoc a = Nil | Snoc (Snoc a) a
xs +++ Nil = xs
xs +++ (Snoc ys y) = Snoc (xs +++ ys) y
We recover O(1) WHNF of left-nested appends,
((ws +++ xs) +++ ys) +++ zs
case zs of
Nil -> (ws +++ xs) +++ ys
Snoc zs z -> Snoc (((ws +++ xs) +++ ys) +++ zs) z
but at the expense of slow right-nested appends.
ws +++ (xs +++ (ys +++ zs))
case (
case (
case zs of { Nil -> ys ; (Snoc zs z) -> Snoc (ys +++ zs) z }
) of { Nil -> xs ; (Snoc ys y) -> Snoc (xs +++ ys) y }
) of { Nil -> ws ; (Snoc xs x) -> Snoc (ws +++ xs) x }
Then, of course, you end up writing a new type of difference list which reassociates appends to the left!
newtype LMonoid m = LMonoid (m -> m) -- "left-associative monoid"
toLM m = LMonoid (<> m)
fromLM (LMonoid f) = f mempty
instance Monoid m => Monoid (LMonoid m) where
mempty = toLM mempty
LMonoid f `mappend` LMonoid g = LMonoid (g . f)
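To see LMonoid at work, here is a sketch (written in the same pre-Semigroup style as the instances above) that gives Snoc a Monoid instance and pushes a right-nested chain through LMonoid:
instance Monoid (Snoc a) where
  mempty  = Nil
  mappend = (+++)

-- However the mappends are grouped, fromLM rebuilds the chain as
-- (((mempty +++ ws) +++ xs) +++ ys) +++ zs, i.e. fully left-nested:
example :: Snoc a -> Snoc a -> Snoc a -> Snoc a -> Snoc a
example ws xs ys zs =
  fromLM (toLM ws `mappend` (toLM xs `mappend` (toLM ys `mappend` toLM zs)))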

Carl hit it in his comment. We can write
data TList a = Nil | Single a | Node !(TList a) (TList a)
singleton :: a -> TList a
singleton = Single
instance Monoid (TList a) where
mempty = Nil
mappend = Node
We could get toList by just deriving Foldable, but let's write it out instead to see just what's going on.
instance Foldable TList where
foldMap _ Nil = mempty
foldMap f (Single a) = f a
foldMap f (Node t u) = foldMap f t <> foldMap f u
toList as0 = go as0 [] where
go Nil k = k
go (Single a) k = a : k
go (Node l r) k = go l (go r k)
toList is O(n), where n is the total number of internal nodes (i.e., the total number of mappend operations used to form the TList). This should be pretty clear: each Node is inspected exactly once. mempty, mappend, and singleton are each obviously O(1).
This is exactly the same as for a DList:
newtype DList a = DList ([a] -> [a])
singletonD :: a -> DList a
singletonD a = DList (a:)
instance Monoid (DList a) where
mempty = DList id
mappend (DList f) (DList g) = DList (f . g)
instance Foldable DList where
foldr c n xs = foldr c n (toList xs)
toList (DList f) = f []
Why, operationally, is this the same? Because, as you indicate in your question, the functions are represented in memory as trees. And they're represented as trees that look a lot like TLists! singletonD x produces a closure containing a (boring) (:) and an exciting x. When applied, it does O(1) work. mempty just produces the id function, which when applied does O(1) work. mappend as bs produces a closure that, when applied, does O(1) work of its own, plus O(length as + length bs) work in its children.
The shapes of the trees produced for TList and DList are actually the same. You should be able to convince yourself that they also have identical asymptotic performance when used incrementally: in each case, the program has to walk down the left spine of the tree to get to the first list element.
Both DList and TList are equally okay when built up and then used only once. They're equally lousy when built once and converted to lists multiple times.
As Will Ness showed with a similar type, the explicit tree representation is better if you want to add support for deconstructing the representation, as you can actually get your hands on the structure. TList can support a reasonably efficient uncons operation (that improves the structure as it works). To get efficient unsnoc as well, you'll need to use a fancier representation (a catenable deque). This implementation also has potentially wretched cache performance. You can switch to a cache-oblivious data structure, but it is practically guaranteed to be complicated.
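For concreteness, here is one way such an uncons might look (a sketch; the name unconsT and the details are ours, not from the original answer). It rotates left-leaning Nodes to the right until an element surfaces, and the tail it returns keeps the improved shape:
unconsT :: TList a -> Maybe (a, TList a)
unconsT Nil                 = Nothing
unconsT (Single a)          = Just (a, Nil)
unconsT (Node Nil u)        = unconsT u
unconsT (Node (Single a) u) = Just (a, u)
unconsT (Node (Node l r) u) = unconsT (Node l (Node r u)) -- rotate right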

As shown in this answer, the trick is the rearrangement of the (.) tree into the ($) list on access.
We can emulate this with
data Dlist a = List [a]
| Append (Dlist a) (Dlist a)
which will juggle the Append nodes, rotating the tree to the right to push the leftmost node up so that it becomes the uppermost-left one, on the first access, after which the next tail(**) operation becomes O(1)(*):
let x = (List [1..10] `Append` List [11..20]) `Append` List [21..30]
and tail x(**) should produce
List [2..10] `Append` (List [11..20] `Append` List [21..30])
tail(**) should be trivial to implement. Of course it will only pattern-match the leftmost List node's list (with (x:xs)) when that node is finally discovered, and won't touch the contents of anything else inside the Append nodes as they are juggled. Laziness is thus naturally preserved.
(**) 2020 edit: what this actually means is to have one uncons :: Dlist a -> (a, Dlist a) operation producing the head and the new, rotated tail, simultaneously, so that uncons on the new tail is O(1).(*)
(*) edit: O(1) in case the leftmost List node's list isn't empty. Overall, taking into account the possible nested-on-the-left Append nodes that will have to be rearranged as they come to the fore after the first leftmost List node is exhausted, the access to all n elements of the result will be O(n+m), where m is the number of empty lists.
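Here is a sketch of how that uncons might be written (our code; the original leaves the implementation to the reader). It skips empty List nodes and performs the right rotations described above:
uncons :: Dlist a -> Maybe (a, Dlist a)
uncons (List [])                = Nothing
uncons (List (x:xs))            = Just (x, List xs)
uncons (Append (List []) r)     = uncons r
uncons (Append (List (x:xs)) r) = Just (x, List xs `Append` r)
uncons (Append (Append l m) r)  = uncons (Append l (Append m r)) -- rotate right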
update: A historical note: this is actually quite similar (if not exactly the same) to the efficient tree-fringe enumeration problem, dealt with in 1977 by John McCarthy, whose gopher function did exactly the same kind of node rearrangement (tree rotations to the right) as the tail(**) proposed here would.

Related

Can foldr and foldl be defined in terms of each other?
Programming in Haskell by Hutton says:

What do we need to define manually? The minimal complete definition for an instance of the Foldable class is to define either foldMap or foldr, as all other functions in the class can be derived from either of these two using the default definitions and the instance for lists.
So how can foldl be defined in terms of foldr?
Can foldr be defined in terms of foldl, so that we can define a Foldable type by defining foldl?
Why is it that in Foldable, fold is defined in terms of foldMap which is defined in terms of foldr, while in list foldable, some specializations of fold are defined in terms of foldl as:
maximum :: Ord a => [a] -> a
maximum = foldl1 max
minimum :: Ord a => [a] -> a
minimum = foldl1 min
sum :: Num a => [a] -> a
sum = foldl (+) 0
product :: Num a => [a] -> a
product = foldl (*) 1
? Can they be rewritten as
maximum :: Ord a => [a] -> a
maximum = foldr1 max
minimum :: Ord a => [a] -> a
minimum = foldr1 min
sum :: Num a => [a] -> a
sum = foldr (+) 0
product :: Num a => [a] -> a
product = foldr (*) 1
Thanks.
In general, neither foldr nor foldl can be implemented in terms of each other. The core operation of Foldable is foldMap, from which all the other operations may be derived. Neither foldr nor foldl are enough. However, the difference only shines through in the case of infinite or (partially) undefined structures, so there's a tendency to gloss over this fact.
@DamianLattenero has shown the "implementations" of foldl and foldr in terms of one another:
foldl' c = foldr (flip c)
foldr' c = foldl (flip c)
But they do not always have the correct behavior. Consider lists. Then, foldr (:) [] xs = xs for all xs :: [a]. However, it is not the case that foldr' (:) [] xs = xs for all xs, because foldr' (:) [] xs = foldl (flip (:)) [] xs, and foldl (in the case of lists) has to walk the entire spine of the list before it can produce an output. If xs is infinite, foldl can't walk the entire infinite list, so foldr' (:) [] xs loops forever for infinite xs, while foldr (:) [] xs just produces xs. foldl' = foldl as desired, however. Essentially, for [], foldr is "natural" and foldl is "unnatural". Implementing foldl with foldr works because you're just losing "naturalness", but implementing foldr in terms of foldl doesn't work, because you cannot recover that "natural" behavior.
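Concretely, here is what you would see in GHCi (a sketch):
-- take 3 (foldr (:) [] [1..])         -- [1,2,3]: foldr streams the spine lazily
-- take 3 (foldl (flip (:)) [] [1..])  -- <<loops>>: foldl must reach the end first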
On the flipside, consider
data Tsil a = Lin | Snoc (Tsil a) a
-- backwards version of data [a] = [] | (:) a [a]
In this case, foldl is natural:
foldl c n Lin = n
foldl c n (Snoc xs x) = c (foldl c n xs) x
And foldr is unnatural:
foldr c = foldl (flip c)
Now, foldl has the good, "productive" behavior on infinite/partially undefined Tsils, while foldr does not. Implementing foldr in terms of foldl works (as I just did above), but you cannot implement foldl in terms of foldr, because you cannot recover that productivity.
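A quick sketch of that productivity (the value ones is ours): an infinite Tsil on which foldl still produces output.
ones :: Tsil Int
ones = Snoc ones 1

-- take 3 (foldl (flip (:)) [] ones)  -- [1,1,1]: foldl peels Snocs lazily
-- foldl (\_ x -> x) 0 ones           -- 1: only the outermost Snoc is needed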
foldMap avoids this issue. For []:
foldMap f [] = mempty
foldMap f (x : xs) = f x <> foldMap f xs
-- foldMap f = foldr (\x r -> f x <> r) mempty
And for Tsil:
foldMap f Lin = mempty
foldMap f (Snoc xs x) = foldMap f xs <> f x
-- foldMap f = foldl (\r x -> r <> f x) mempty
Now,
instance Semigroup [a] where
[] <> ys = ys
(x : xs) <> ys = x : (xs <> ys)
-- (<>) = (++)
instance Monoid [a] where mempty = []
instance Semigroup (Tsil a) where
ys <> Lin = ys
ys <> (Snoc xs x) = Snoc (ys <> xs) x
instance Monoid (Tsil a) where mempty = Lin
And we have
foldMap (: []) xs = xs -- even for infinite xs
foldMap (Snoc Lin) xs = xs -- even for infinite xs
Implementations for foldl and foldr are actually given in the documentation:
foldr f z t = appEndo (foldMap (Endo . f) t) z
foldl f z t = appEndo (getDual (foldMap (Dual . Endo . flip f) t)) z
f is used to turn each a in the t a into a b -> b (Endo b), and then all the b -> bs are composed together (foldr does it one way, while foldl composes them backwards with Dual (Endo b)) and the final b -> b is then applied to the initial value z :: b.
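A worked expansion (a sketch) for a two-element list makes this concrete:
-- foldMap (Endo . f) [a1, a2] = Endo (f a1) <> Endo (f a2) = Endo (f a1 . f a2)
-- so   foldr f z [a1, a2] = appEndo (Endo (f a1 . f a2)) z = f a1 (f a2 z)
--
-- Dual flips each (<>), so the compositions pile up the other way around:
-- foldMap (Dual . Endo . flip f) [a1, a2] = Dual (Endo (flip f a2 . flip f a1))
-- so   foldl f z [a1, a2] = (flip f a2 . flip f a1) z = f (f z a1) a2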
foldr is replaced with foldl in specializations sum, minimum, etc. in the instance Foldable [], for performance reasons. The idea is that you can't take the sum of an infinite list anyway (this assumption is false, but it's generally true enough), so we don't need foldr to handle it. Using foldl is, in some cases, more performant than foldr, so foldr is changed to foldl. I would expect, for Tsil, that foldr is sometimes more performant than foldl, and therefore sum, minimum, etc. can be reimplemented in terms of foldr, instead of foldl, in order to get that performance improvement. Note that the documentation says that sum, minimum, etc. should be equivalent to the forms using foldMap/fold, but may be less defined, which is exactly what would happen.
Bit of an appendix, but I think it's worth noticing that:
genFoldr c n [] = n; genFoldr c n (x : xs) = c x (genFoldr c n xs)
instance Foldable [] where
foldl c = genFoldr (flip c)
foldr c = foldl (flip c)
-- similarly for Tsil
is actually a valid, lawful Foldable instance, where both foldr and foldl are unnatural and neither can handle infinite structures (foldMap is defaulted in terms of foldr, and thus won't handle infinite lists either). In this case, foldr and foldl can be written in terms of each other (foldl c = foldr (flip c), though it is implemented with genFoldr). However, this instance is undesirable, because we would really like a foldr that can handle infinite lists, so we instead implement
instance Foldable [] where
foldr = genFoldr
foldl c = foldr (flip c)
where the equality foldr c = foldl (flip c) no longer holds.
Here's a type for which neither foldl nor foldr can be implemented in terms of the other:
{-# LANGUAGE DeriveFoldable #-}
import Data.Functor.Reverse
import Data.Monoid

data DL a = DL [a] (Reverse [] a)
  deriving Foldable
The Foldable implementation looks like
instance Foldable DL where
foldMap f (DL front rear) = foldMap f front <> foldMap f rear
Inlining the Foldable instance for Reverse [], and adding the corresponding foldr and foldl,
foldMap f (DL front rear) = foldMap f front <> getDual (foldMap (Dual . f) (getReverse rear))
foldr c n (DL xs (Reverse ys)) =
foldr c (foldl (flip c) n ys) xs
foldl f b (DL xs (Reverse ys)) =
foldr (flip f) (foldl f b xs) ys
If the front list is infinite, then foldr defined using foldl won't work. If the rear list is infinite, then foldl defined using foldr won't work.
In the case of lists: foldl can be defined in terms of foldr but not vice-versa.
foldl f a l = foldr (\b e c -> e (f c b)) id l a
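Tracing this definition on a two-element list shows how the foldr builds up a function that threads the accumulator left to right (a sketch):
-- foldl f a [x, y]
--   = foldr (\b e c -> e (f c b)) id [x, y] a
--   = (\c -> (\c' -> id (f c' y)) (f c x)) a
--   = id (f (f a x) y)
--   = f (f a x) y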
For other types which implement Foldable: the opposite may be true.
Edit 2:
There is another way (based on this article) to define foldl in terms of foldr:
foldl f a list = (foldr construct (\acc -> acc) list) a
where
construct x r = \acc -> r (f acc x)
Edit 1
Flipping the arguments of the folded function does not yield the same foldr/foldl; that is, these definitions do not satisfy the foldr-foldl equality:
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
and foldl in terms of foldr:
foldl' :: Foldable t => (b -> a -> b) -> b -> t a -> b
foldl' f b = foldr (flip f) b
and foldr:
foldr' :: Foldable t => (a -> b -> b) -> b -> t a -> b
foldr' f b = foldl (flip f) b
The converse is not true, since foldr may work on infinite lists, which foldl variants never can do. However, for finite lists, foldr can also be written in terms of foldl although losing laziness in the process. (for more check here)
And these examples do not satisfy it either:
foldr (-) 2 [8,10] = 8 - (10 - 2) == 0
foldl (flip (-)) 2 [8,10] = (flip (-) (flip (-) 2 8) 10) == 4

Haskell: Is it true that function application distributes over list concatenation?

After reading this question: Functional proofs (Haskell)
And after looking at the inductive proof of forall xs ys. length (xs ++ ys) = length xs + length ys from the Haskell School of Music (page 164).
It seemed to me that function application distributes over list concatenation.
Hence the more general law might be that forall f xs ys. f (xs ++ ys) = f xs ++ f ys.
But how would one prove/disprove such a predicate?
-- EDIT --
I made a typo: it was meant to be forall f xs ys. f (xs ++ ys) = f xs + f ys, which matches what the previous question and the Haskell SoM use. That being said, because of this typo it's no longer a "distributivity" property. However, @leftaroundabout gave the correct answer for my original typo'd question. And as for my intended question, the law is still not correct, because functions don't need to preserve the structural value. The f might give a completely different answer depending on the length of the list it is applied to.
No, this is clearly not true in general:
f [_] = []
f l = l
then
f ([1] ++ [2]) = f [1,2] = [1,2]
but
f [1] ++ f [2] = [] ++ [] = []
I'm sure the functions which do have this problem form an interesting class, but general functions can do pretty much anything to a list's structure which thwarts such invariants.
And after looking at the inductive proof of forall xs ys. length (xs ++ ys) = length xs + length ys from the Haskell School of Music (page 164).
It seemed to me that function application distributes over list concatenation.
Well, clearly that is not the case. For example:
reverse ([1..3] ++ [4..6]) /= reverse [1..3] ++ reverse [4..6]
The example that you're quoting is a special case that's called a monoid morphism: a function f :: m -> n such that:
m and n are monoids with binary operation <> and identity mempty;
f mempty = mempty
f (m <> m') == f m <> f m'
So length :: [a] -> Int is a monoid morphism, sending [] to 0 and ++ to +:
length [] = 0
length (xs ++ ys) = length xs + length ys
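As a quick sanity check, the two morphism laws for length can be property-tested (a sketch using QuickCheck; the property names are ours):
import Test.QuickCheck

prop_unit :: Bool
prop_unit = length ([] :: [Int]) == 0

prop_append :: [Int] -> [Int] -> Bool
prop_append xs ys = length (xs ++ ys) == length xs + length ys

main :: IO ()
main = quickCheck prop_unit >> quickCheck prop_append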

Is it always safe to use unsafeCoerce for valid equalities?

I have a witness type for type-level lists,
data List xs where
Nil :: List '[]
Cons :: proxy x -> List xs -> List (x ': xs)
as well as the following utilities.
-- Type level append
type family xs ++ ys where
'[] ++ ys = ys
(x ': xs) ++ ys = x ': (xs ++ ys)
-- Value level append
append :: List xs -> List ys -> List (xs ++ ys)
append Nil ys = ys
append (Cons x xs) ys = Cons x (append xs ys)
-- Proof of associativity of (++)
assoc :: List xs -> proxy ys -> proxy' zs -> ((xs ++ ys) ++ zs) :~: (xs ++ (ys ++ zs))
assoc Nil _ _ = Refl
assoc (Cons _ xs) ys zs = case assoc xs ys zs of Refl -> Refl
Now, I have two different but equivalent definitions of a type-level reverse function,
-- The first version, O(n)
type Reverse xs = Rev '[] xs
type family Rev acc xs where
Rev acc '[] = acc
Rev acc (x ': xs) = Rev (x ': acc) xs
-- The second version, O(n²)
type family Reverse' xs where
Reverse' '[] = '[]
Reverse' (x ': xs) = Reverse' xs ++ '[x]
The first is more efficient, but the second is easier to use when proving things to the compiler, so it would be nice to have a proof of equivalence. In order to do this, I need a proof of Rev acc xs :~: Reverse' xs ++ acc. This is what I came up with:
revAppend :: List acc -> List xs -> Rev acc xs :~: Reverse' xs ++ acc
revAppend _ Nil = Refl
revAppend acc (Cons x xs) =
case (revAppend (Cons x acc) xs, assoc (reverse' xs) (Cons x Nil) acc) of
(Refl, Refl) -> Refl
reverse' :: List xs -> List (Reverse' xs)
reverse' Nil = Nil
reverse' (Cons x xs) = append (reverse' xs) (Cons x Nil)
Unfortunately, revAppend is O(n³), which completely defeats the purpose of this exercise. However, we can bypass all this and get O(1) by using unsafeCoerce:
revAppend :: Rev acc xs :~: Reverse' xs ++ acc
revAppend = unsafeCoerce Refl
Is this safe? What about the general case? For example, if I have two type families F :: k -> * and G :: k -> *, and I know that they are equivalent, is it safe to define the following?
equal :: F a :~: G a
equal = unsafeCoerce Refl
It would be very nice if GHC used a termination checker on expressions e :: T where T has only one constructor K with no arguments (e.g. :~:, ()). When the check succeeds, GHC could rewrite e as K, skipping the computation completely. You would have to rule out FFI, unsafePerformIO, trace, ... but it seems feasible. If this were implemented, it would solve the posted question very nicely, allowing one to actually write proofs having zero runtime cost.
Failing this, you can use unsafeCoerce in the meanwhile, as you propose. If you are really, really sure that two types are the same, you can use it safely. The typical example is implementing Data.Typeable. Of course, a misuse of unsafeCoerce on different types would lead to unpredictable effects, hopefully a crash.
You could even write your own "safer" variant of unsafeCoerce:
unsafeButNotSoMuchCoerce :: (a :~: b) -> a -> b
#ifdef CHECK_TYPEEQ
unsafeButNotSoMuchCoerce Refl = id
#else
unsafeButNotSoMuchCoerce _ = unsafeCoerce
#endif
If CHECK_TYPEEQ is defined it leads to slower code. If undefined, it skips it and coerces at zero cost. In the latter case it is still unsafe because you can pass bottom as the first arg and the program will not loop but will instead perform the wrong coercion. In this way you can test your program with the safe but slow mode, and then turn to the unsafe mode and pray your "proofs" were always terminating.

Restriction on the data type definition

I have a type synonym type Entity = ([Feature], Body) for whatever Feature and Body mean. Objects of Entity type are to be grouped together:
type Bunch = [Entity]
and the assumption, crucial for the algorithm working with Bunch, is that any two entities in the same bunch have the same number of features.
If I were to implement this constraint in an OOP language, I would add the corresponding check to the method encapsulating the addition of entities into a bunch.
Is there a better way to do it in Haskell? Preferably, on the definition level. (If the definition of Entity also needs to be changed, no problem.)
Using type-level length annotations
So here's the deal. Haskell does have type-level natural numbers, and you can attach them to data using "phantom types". However you do it, the types will look like this:
data Z
data S n
data LAList x len = LAList [x] -- length-annotated list
Then you can add some construction functions for convenience:
lalist1 :: x -> LAList x (S Z)
lalist1 x = LAList [x]
lalist2 :: x -> x -> LAList x (S (S Z))
lalist2 x y = LAList [x, y]
-- ...
And then you've got more generic methods:
(~:) :: x -> LAList x n -> LAList x (S n)
x ~: LAList xs = LAList (x : xs)
infixr 5 ~:
nil :: LAList x Z
nil = LAList []
lahead :: LAList x (S n) -> x
lahead (LAList xs) = head xs
latail :: LAList x (S n) -> LAList x n
latail (LAList xs) = tail xs
but the List definition by itself doesn't give you any of this, because it's complicated. You may be interested in the Data.FixedList package for a somewhat different approach, too. Basically every approach is going to start off looking a little weird, with some data type that has no constructor, but it starts to look normal after a little bit.
You might also be able to get a typeclass so that all of the lalist1, lalist2 operators above can be replaced with
class FixedLength t where
la :: t x -> LAList x n
but you will probably need the -XTypeSynonymInstances flag to do this, as you want to do something like
type Pair x = (x, x)
instance FixedLength Pair where
la :: Pair x -> LAList x (S (S Z))
la (a, b) = LAList [a, b]
(it's a kind mismatch when you go from (a, b) to Pair a).
Using runtime checking
You can very easily take a different approach and encapsulate all of this as a runtime error or explicitly model the error in your code:
-- this may change if you change your definition of the Bunch type
features :: Entity -> [Feature]
features = fst
-- we also assume a runBunch :: [Entity] -> Something function
-- that you're trying to run on this Bunch.
allTheSame :: (Eq x) => [x] -> Bool
allTheSame (x : xs) = all (x ==) xs
allTheSame [] = True
permissiveBunch :: [Entity] -> Maybe Something
permissiveBunch es
| allTheSame (map (length . features) es) = Just (runBunch es)
| otherwise = Nothing
strictBunch :: [Entity] -> Something
strictBunch es
| allTheSame (map (length . features) es) = runBunch es
| otherwise = error ("runBunch requires all feature lists to be the same length; saw instead " ++ show (map (length . features) es))
Then your runBunch can just assume that all the lengths are the same, since that is explicitly checked for above. You can get around pattern-matching weirdnesses with, say, the zip :: [a] -> [b] -> [(a, b)] function in the Prelude, if you need to pair up the features next to each other. (The worry here would be an error in an algorithm that pattern-matches only runBunch' (x:xs) (y:ys) and runBunch' [] [], for which Haskell warns that there are two patterns you've not considered in the match.)
Using tuples and type classes
One final way to do it which is a compromise between the two (but makes for pretty good Haskell code) involves making Entity parametrized over all features:
type Entity x = (x, Body)
and then including a function which can zip different entities of different lengths together:
class ZippableFeatures z where
fzip :: z -> z -> [(Feature, Feature)]
instance ZippableFeatures () where
fzip () () = []
instance ZippableFeatures Feature where
fzip f1 f2 = [(f1, f2)]
instance ZippableFeatures (Feature, Feature) where
fzip (a1, a2) (b1, b2) = [(a1, b1), (a2, b2)]
Then you can use tuples for your feature lists, as long as they don't get any larger than the maximum tuple length (which is 15 on my GHC). If you go larger than that, of course, you can always define your own data types, but it's not going to be as general as type-annotated lists.
If you do this, your type signature for runBunch will simply look like:
runBunch :: (ZippableFeatures z) => [Entity z] -> Something
When you run it on things with the wrong number of features you'll get compiler errors that it can't unify the type (a, b) with (a, b, c).
There are various ways to enforce length constraints like that; here's one:
{-# LANGUAGE DataKinds, KindSignatures, GADTs, TypeFamilies #-}
import Prelude hiding (foldr)
import Data.Foldable
import Data.Monoid
import Data.Traversable
import Control.Applicative
data Feature -- Whatever that really is
data Body -- Whatever that really is
data Nat = Z | S Nat -- Natural numbers
type family Plus (m::Nat) (n::Nat) where -- Type level natural number addition
Plus Z n = n
Plus (S m) n = S (Plus m n)
data LList (n :: Nat) a where -- Lists tagged with their length at the type level
Nil :: LList Z a
Cons :: a -> LList n a -> LList (S n) a
Some functions on these lists:
llHead :: LList (S n) a -> a
llHead (Cons x _) = x
llTail :: LList (S n) a -> LList n a
llTail (Cons _ xs) = xs
llAppend :: LList m a -> LList n a -> LList (Plus m n) a
llAppend Nil ys = ys
llAppend (Cons x xs) ys = Cons x (llAppend xs ys)
data Entity n = Entity (LList n Feature) Body
data Bunch where
Bunch :: [Entity n] -> Bunch
Some instances:
instance Functor (LList n) where
fmap f Nil = Nil
fmap f (Cons x xs) = Cons (f x) (fmap f xs)
instance Foldable (LList n) where
foldMap f Nil = mempty
foldMap f (Cons x xs) = f x `mappend` foldMap f xs
instance Traversable (LList n) where
traverse f Nil = pure Nil
traverse f (Cons x xs) = Cons <$> f x <*> traverse f xs
And so on. Note that n in the definition of Bunch is existential. It can be anything, and what it actually is doesn't affect the type—all bunches have the same type. This limits what you can do with bunches to a certain extent. Alternatively, you can tag the bunch with the length of its feature lists. It all depends what you need to do with this stuff in the end.
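A sketch of that alternative (the names TaggedBunch and addEntity are ours), reusing Nat and Entity from above:
-- The bunch itself carries the shared feature-list length, so the
-- invariant is enforced by the type checker rather than hidden
-- behind an existential.
data TaggedBunch (n :: Nat) = TaggedBunch [Entity n]

-- Adding an entity only typechecks when its feature count matches:
addEntity :: Entity n -> TaggedBunch n -> TaggedBunch n
addEntity e (TaggedBunch es) = TaggedBunch (e : es)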

Implementing a zipper for length-indexed lists

I'm trying to implement a kind of zipper for length-indexed lists which would return each item of the list paired with a list where that element is removed. E.g. for ordinary lists:
zipper :: [a] -> [(a, [a])]
zipper = go [] where
go _ [] = []
go prev (x:xs) = (x, prev ++ xs) : go (prev ++ [x]) xs
So that
> zipper [1..5]
[(1,[2,3,4,5]), (2,[1,3,4,5]), (3,[1,2,4,5]), (4,[1,2,3,5]), (5,[1,2,3,4])]
My current attempt at implementing the same thing for length-indexed lists:
{-# LANGUAGE GADTs #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE TypeFamilies #-}
data Nat = Zero | Succ Nat
type One = Succ Zero
type family (+) (a :: Nat) (b :: Nat) :: Nat
type instance (+) Zero n = n
type instance (+) (Succ n) m = Succ (n + m)
data List :: Nat -> * -> * where
Nil :: List Zero a
Cons :: a -> List size a -> List (Succ size) a
single :: a -> List One a
single a = Cons a Nil
cat :: List a i -> List b i -> List (a + b) i
cat Nil ys = ys
cat (Cons x xs) ys = Cons x (xs `cat` ys)
zipper :: List (Succ n) a -> List (Succ n) (a, List n a)
zipper = go Nil where
go :: (p + Zero) ~ p
=> List p a -> List (Succ q) a -> List (Succ q) (a, List (p + q) a)
go prev (Cons x Nil) = single (x, prev)
go prev (Cons x xs) = (x, prev `cat` xs) `Cons` go (prev `cat` single x) xs
This feels like it should be rather straightforward, but as there doesn't seem to be any way to convey to GHC that e.g. + is commutative and associative or that zero is the identity, I'm running into lots of problems where the type checker (understandably) complains that it cannot determine that a + b ~ b + a or that a + Zero ~ a.
Do I need to add some sort of proof objects (data Refl a b where Refl :: Refl a a et al.) or is there some way to make this work with just adding more explicit type signatures?
Alignment
Dependently typed programming is like doing two jigsaws which some rogue has glued together. Less metaphorically, we express simultaneous computations at the value level and at the type level, and we must ensure their compatibility. Of course, we are each our own rogue, so if we can arrange for the jigsaws to be glued in alignment, we shall have an easier time of it. When you see proof obligations for type repair, you might be tempted to ask
Do I need to add some sort of proof objects (data Refl a b where Refl :: Refl a a et al.) or is there some way to make this work with just adding more explicit type signatures?
But you might first consider in what way the value- and type-level computations are out of alignment, and whether there is any hope to bring them closer.
A Solution
The question here is how to compute the vector (length-indexed list) of selections from a vector. So we'd like something with type
List (Succ n) a -> List (Succ n) (a, List n a)
where the element in each input position gets decorated with the one-shorter vector of its siblings. The proposed method is to scan left-to-right, accumulating the elder siblings in a list which grows on the right, then concatenate with the younger siblings at each position. Growing lists on the right is always a worry, especially when the Succ for the length is aligned to the Cons on the left. The need for concatenation necessitates type-level addition, but the arithmetic resulting from right-ended activity is out of alignment with the computation rules for addition. I'll get back to this style in a bit, but let's try thinking it out again.
Before we get into any accumulator-based solution, let's just try bog standard structural recursion. We have the "one" case and the "more" case.
picks (Cons x xs@Nil) = Cons (x, xs) Nil
picks (Cons x xs@(Cons _ _)) = Cons (x, xs) (undefined (picks xs))
In both cases, we put the first decomposition at the front. In the second case, we have checked that the tail is nonempty, so we can ask for its selections. We have
x :: a
xs :: List (Succ n) a
picks xs :: List (Succ n) (a, List n a)
and we want
Cons (x, xs) (undefined (picks xs)) :: List (Succ (Succ n)) (a, List (Succ n) a)
undefined (picks xs) :: List (Succ n) (a, List (Succ n) a)
so the undefined needs to be a function which grows all the sibling lists by reattaching x at the left end (and left-endedness is good). So, I define the Functor instance for List n
instance Functor (List n) where
fmap f Nil = Nil
fmap f (Cons x xs) = Cons (f x) (fmap f xs)
and I curse the Prelude and
import Control.Arrow((***))
so that I can write
picks (Cons x xs@Nil) = Cons (x, xs) Nil
picks (Cons x xs@(Cons _ _)) = Cons (x, xs) (fmap (id *** Cons x) (picks xs))
which does the job with not a hint of addition, let alone a proof about it.
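A quick check of what picks computes (a sketch; ordinary-list notation is used in the comments for readability):
-- picks (Cons 'a' (Cons 'b' (Cons 'c' Nil)))
--   ~ [ ('a', "bc"), ('b', "ac"), ('c', "ab") ]   -- as Lists, not []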
Variations
I got annoyed about doing the same thing in both lines, so I tried to wriggle out of it:
picks :: m ~ Succ n => List m a -> List m (a, List n a) -- DOESN'T TYPECHECK
picks Nil = Nil
picks (Cons x xs) = Cons (x, xs) (fmap (id *** (Cons x)) (picks xs))
But GHC solves the constraint aggressively and refuses to allow Nil as a pattern. And it's correct to do so: we really shouldn't be computing in a situation where we know statically that Zero ~ Succ n, as we can easily construct some segfaulting thing. The trouble is just that I put my constraint in a place with too global a scope.
Instead, I can declare a wrapper for the result type.
data Pick :: Nat -> * -> * where
Pick :: {unpick :: (a, List n a)} -> Pick (Succ n) a
The Succ n return index means the nonemptiness constraint is local to a Pick. A helper function does the left-end extension,
pCons :: a -> Pick n a -> Pick (Succ n) a
pCons b (Pick (a, as)) = Pick (a, Cons b as)
leaving us with
picks' :: List m a -> List m (Pick m a)
picks' Nil = Nil
picks' (Cons x xs) = Cons (Pick (x, xs)) (fmap (pCons x) (picks' xs))
and if we want
picks = fmap unpick . picks'
That's perhaps overkill, but it might be worth it if we want to separate older and younger siblings, splitting lists in three, like this:
data Pick3 :: Nat -> * -> * where
Pick3 :: List m a -> a -> List n a -> Pick3 (Succ (m + n)) a
pCons3 :: a -> Pick3 n a -> Pick3 (Succ n) a
pCons3 b (Pick3 bs x as) = Pick3 (Cons b bs) x as
picks3 :: List m a -> List m (Pick3 m a)
picks3 Nil = Nil
picks3 (Cons x xs) = Cons (Pick3 Nil x xs) (fmap (pCons3 x) (picks3 xs))
Again, all the action is left-ended, so we're fitting nicely with the computational behaviour of +.
Accumulating
If we want to keep the style of the original attempt, accumulating the elder siblings as we go, we could do worse than to keep them zipper-style, storing the closest element in the most accessible place. That is, we can store the elder siblings in reverse order, so that at each step we need only Cons, rather than concatenating. When we want to build the full sibling list in each place, we need to use reverse-concatenation (really, plugging a sublist into a list zipper). You can type revCat easily for vectors if you deploy the abacus-style addition:
type family (+/) (a :: Nat) (b :: Nat) :: Nat
type instance (+/) Zero n = n
type instance (+/) (Succ m) n = m +/ Succ n
That's the addition which is in alignment with the value-level computation in revCat, defined thus:
revCat :: List m a -> List n a -> List (m +/ n) a
revCat Nil ys = ys
revCat (Cons x xs) ys = revCat xs (Cons x ys)
We acquire a zipperized go version
picksr :: List (Succ n) a -> List (Succ n) (a, List n a)
picksr = go Nil where
go :: List p a -> List (Succ q) a -> List (Succ q) (a, List (p +/ q) a)
go p (Cons x xs@Nil) = Cons (x, revCat p xs) Nil
go p (Cons x xs@(Cons _ _)) = Cons (x, revCat p xs) (go (Cons x p) xs)
and nobody proved anything.
Conclusion
Leopold Kronecker should have said
God made the natural numbers to perplex us: all the rest is the work of man.
One Succ looks very like another, so it is very easy to write down expressions which give the size of things in a way which is out of alignment with their structure. Of course, we can and should (and are about to) equip GHC's constraint solver with improved kit for type-level numerical reasoning. But before that kicks in, it's worth just conspiring to align the Conses with the Succs.
