List based on right Kan extension - Haskell

In "Kan Extensions for Program Optimisation" by Ralf Hinze there is a definition of the List type based on the right Kan extension of the forgetful functor from the category of monoids along itself (section 7.4). The paper gives the following Haskell implementation:
newtype List a = Abstr {
  apply :: forall z . (Monoid z) => (a -> z) -> z
}
I was able to define the usual nil and cons constructors:
nil :: List a
nil = Abstr (\f -> mempty)
cons :: a -> List a -> List a
cons x (Abstr app) = Abstr (\f -> mappend (f x) (app f))
With the following Monoid instance for the Maybe functor (note that base already provides a different Monoid (Maybe a) instance; the one below behaves like First from Data.Monoid), I managed to define a head function:
instance Monoid (Maybe a) where
  mempty = Nothing
  mappend Nothing m = m
  mappend (Just a) m = Just a
head :: List a -> Maybe a
head (Abstr app) = app Just
Question: How can one define the tail function?

Here is a rather principled solution that implements head and tail in one go (full gist):
First of all, we know how to append lists (it will be useful later on):
append :: List a -> List a -> List a
append (Abstr xs) (Abstr ys) = Abstr (\ f -> xs f <> ys f)
Then we introduce a new type Split which we will use to detect whether a List is empty or not (and get, in the case it's non-empty, a head and a tail):
newtype Split a = Split { outSplit :: Maybe (a, List a) }
This new type forms a monoid: indeed we know how to append two lists.
instance Monoid (Split a) where
  mempty = Split Nothing
  mappend (Split Nothing) (Split nns) = Split nns
  mappend (Split mms) (Split Nothing) = Split mms
  mappend (Split (Just (m, ms))) (Split (Just (n, ns))) =
    Split $ Just (m, append ms (cons n ns))
Which means that we can get a function from List a to Split a using List a's apply:
split :: List a -> Split a
split xs = apply xs $ \ a -> Split $ Just (a, nil)
head and tail can finally be trivially derived from split:
head :: List a -> Maybe a
head = fmap fst . outSplit . split
tail :: List a -> Maybe (List a)
tail = fmap snd . outSplit . split
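As a quick sanity check, here is a hypothetical ghci session (with the definitions above in scope, shadowing Prelude's head and tail):
> head (cons 1 (cons 2 nil))
Just 1
> tail (cons 1 (cons 2 nil)) >>= head
Just 2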

This implementation of lists as free monoids is provided in the package fmlist, which notes some interesting properties of it. Unlike most implementations of lists, which are right-biased, this one is truly unbiased: you can make an arbitrary tree, and although the monoid laws force you to see it as flattened, you can still observe some differences in the infinite case. (This is almost a Haskell quirk -- in most settings a free monoid would have no observable structure beyond its flattening.) It also has an implementation of tail, so that's sort of an answer to your question (but see below).
With these sorts of representations (not just this particular one, but also e.g. forall r. (a -> r -> r) -> r -> r lists), there are usually some operations (e.g. appending) that become easier, and some (e.g. zip and tail) that become more difficult. This is discussed a bit in various places, e.g. How to take the tail of a functional stream.
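For comparison, here is a minimal sketch of the fold-based encoding just mentioned; the names FoldList, fnil, fcons and fappend are mine, for illustration only. Appending is a one-liner, whereas tail has no similarly direct definition:
{-# LANGUAGE RankNTypes #-}

-- A list is identified with its own foldr.
newtype FoldList a = FoldList (forall r. (a -> r -> r) -> r -> r)

fnil :: FoldList a
fnil = FoldList (\_ z -> z)

fcons :: a -> FoldList a -> FoldList a
fcons x (FoldList k) = FoldList (\c z -> c x (k c z))

-- Appending is easy: fold the first list with the second as the base case.
fappend :: FoldList a -> FoldList a -> FoldList a
fappend (FoldList xs) (FoldList ys) = FoldList (\c z -> xs c (ys c z))
-- tail, by contrast, has to thread extra state through the fold,
-- much like the Split trick above.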
Looking more closely at fmlist, though, its solution is pretty unsatisfactory: It just converts the nice balanced tree that you give it to a right-biased list using foldr, which allows it to do regular list operations, but loses the monoidal structure. The tail of a "middle-infinite" list is no longer "middle-infinite", it's just right-infinite like a regular list.
It should be possible to come up with a clever Monoid instance to compute the tail while disturbing the rest of the structure as little as possible, but an obvious one doesn't come to mind off-hand. I can think of a non-clever "brute force" solution, though: Cheat and reify the "list" into a tree using an invalid Monoid instance, inspect the tree, and then fold it back up so the end result is valid. Here's what it would look like with my nonfree package and fmlist:
nail :: FM.FMList a -> FM.FMList a
nail (FM.FM k) = FM.FM $ \f -> foldMap f (nail' (k N))
nail' :: N a -> N a
nail' NEmpty = error "nail' NEmpty"
nail' (N x) = NEmpty
nail' (NAppend l r) =
  case normalize l of
    NEmpty -> nail' r
    N x -> r
    l' -> NAppend (nail' l') r
-- Normalize a tree so that the left side of a root NAppend isn't an empty
-- subtree of any shape. If the tree is infinite in a particular way, this
-- won't terminate, so in that sense taking the tail of a list can make it
-- slightly worse (but you were already in pretty bad shape as far as
-- operations on the left side are concerned, and this is a pathological case
-- anyway).
normalize :: N a -> N a
normalize (NAppend l r) =
  case normalize l of
    NEmpty -> normalize r
    l' -> NAppend l' r
normalize n = n
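For completeness, the code above assumes roughly this shape for nonfree's N type; this is a sketch of mine, and the actual package may differ in details:
-- A reified "list": a tree that records appends instead of flattening them.
data N a = NEmpty | N a | NAppend (N a) (N a)

-- Intentionally law-breaking: mappend remembers the tree structure,
-- which is exactly what lets nail' inspect and rebuild it.
instance Semigroup (N a) where
  (<>) = NAppend

instance Monoid (N a) where
  mempty = NEmpty

-- Folding flattens the tree back into a lawful monoid, as in nail.
instance Foldable N where
  foldMap _ NEmpty = mempty
  foldMap f (N x) = f x
  foldMap f (NAppend l r) = foldMap f l <> foldMap f r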

Related

How to avoid infinite loop in zipWith a self reference?

I'd like to create a list data structure whose zipWith behaves better with self-reference. This is for an esoteric language that will rely on self-reference and laziness to be Turing complete using only values (no user functions). I've already created it, called Atlas, but it has many built-ins; I'd like to reduce those and be able to compile/interpret it in Haskell.
The issue is that zipWith checks whether either list is empty and, if so, returns empty. But when that emptiness check depends on the result of the zipWith itself, it loops forever. Essentially I'd like it to detect this case and have faith that the list won't be empty. Here is an example using DList:
import Data.DList
import Data.List (uncons)

zipDL :: (a->b->c) -> DList a -> DList b -> DList c
zipDL f a b = fromList $ zipL f (toList a) (toList b)

zipL :: (a->b->c) -> [a] -> [b] -> [c]
zipL _ [] _ = []
zipL _ _ [] = []
zipL f ~(a:as) ~(b:bs) = f a b : zipL f as bs

a = fromList [5,6,7]

main = print $ dh where
  d = zipDL (+) a $ snoc (fromList dt) 0
  ~(Just (dh,dt)) = uncons $ toList d
This code would sum the list 5,6,7 except for the issue. It can be fixed by removing zipL _ _ [] = [] because then it assumes that the result won't be empty and then it in fact turns out not to be empty. But this is a bad solution because we can't always assume that it is the second list that could have the self reference.
Another way of explaining it is to talk about the sizes of these lists.
The size of zip a b = min (size a) (size b)
So in this example: size d = min (size a) (size d - 1 + 1)
And therein lies the problem: if the size of d is 0, then size d = 0 is consistent; if the size of d is 1, the size is 1; but as soon as the size of d is said to be greater than the size of a, the equation forces it back to the size of a, which is a contradiction. Any size from 0 to size a works, which means the size is underdetermined.
Essentially I want to detect this case and make the size of d = size a.
So far the only thing I have figured out is to make all lists lists of Maybe, and terminate lists with a Nothing value. Then in the application of the zipWith binary function return Nothing if either value is Nothing. You can then take out both of the [] checks in zip, because you can think of all lists as being infinite. Finally to make the summation example work, instead of doing a snoc, do a map, and replace any Nothing value with the snoc value. This works because when checking the second list for Nothing, it can lazily return true, since no value of the second list can be nothing.
Here is that code:
import Data.Maybe

data L a = L (Maybe a) (L a)

nil :: L a
nil = L Nothing nil

fromL :: [a] -> L a
fromL [] = nil
fromL (x:xs) = L (Just x) (fromL xs)

binOpMaybe :: (a->b->c) -> Maybe a -> Maybe b -> Maybe c
binOpMaybe f Nothing _ = Nothing
binOpMaybe f _ Nothing = Nothing
binOpMaybe f (Just a) (Just b) = Just (f a b)

zip2W :: (a->b->c) -> L a -> L b -> L c
zip2W f ~(L a as) ~(L b bs) = L (binOpMaybe f a b) (zip2W f as bs)

unconsL :: L a -> (Maybe a, Maybe (L a))
unconsL ~(L a as) = (a, Just as)

mapOr :: a -> L a -> L a
mapOr v ~(L a as) = L (Just $ fromMaybe v a) $ mapOr v as

main = print $ h
  where
    a = fromL [4,5,6]
    b = zip2W (+) a (mapOr 0 (fromJust t))
    (h,t) = unconsL $ b
The downside to this approach is it needs this other operator to map with Just . fromMaybe initialvalue. This is a less intuitive operator than ++. And without it the language could be built entirely on ++ uncons and (:[]) which would be pretty neat.
The other thing I've figured out is, in the current Ruby implementation, to throw an error when a value depends on itself, and catch it in the empty-list detection. But this is very hacky and not entirely sound, although it does work for cases like this. I don't think this can work in Haskell, since I don't think you can detect self-dependence?
Sorry for the long description and the very odd use case. I've spent tons of time thinking about this, but haven't solved it yet and can't explain it any more succinctly! Not expecting an answer but figured it is worth a shot, thanks for considering.
EDIT:
After seeing it framed as a greatest fixed point question, it seems like a poor question because there is no efficient general solution to such a problem. For example, suppose the code was b = zipWith (+) a (if length b < 1 then [1] else []).
For my purposes it could still be nice to handle some cases correctly - the example provided does have a solution. So I could reframe the question as: when can we find the greatest fixed point efficiently and what is that fixed point? But I believe there is no simple answer to such a question, and so it would be a poor basis for a programming language to rely on ad hoc rules.
Sounds like you want a greatest fixed point. I'm not sure I've seen this done before, but maybe it's possible to make a sensible type class for types that support those.
class GF a where gfix :: (a -> a) -> a

instance GF a => GF [a] where
  gfix f = case (f (repeat undefined), f []) of
    (_:_, _) -> b:bs where
      b = gfix (\a' -> head (f (a':bs)))
      bs = gfix (\as' -> tail (f (b:as')))
    ([], []) -> []
    _ -> error "no fixed point greater than bottom exists"

-- use the usual least fixed point. this ain't quite right, but
-- it works for this example, and maybe it's Good Enough
instance GF Int where gfix f = let x = f x in x
Try it out in ghci:
> gfix (\xs -> zipWith (+) [5,6,7] (tail xs ++ [0])) :: [Int]
[18,13,7]
This implementation isn't particularly efficient; e.g. replacing [5,6,7] with [1..n] results in a runtime that's quadratic in n. Perhaps with some cleverness that can be improved, but it's not immediately obvious to me how that would go.
I have an answer for this specific case, not general.
appendRepeat :: a -> [a] -> [a]
appendRepeat v a = h : appendRepeat v t
  where
    ~(h,t) =
      if null a
        then (v,[])
        else (head a,tail a)

a = [4,5,6]

main = print $ head b
  where
    b = zipWith (+) a $ appendRepeat 0 (tail b)
appendRepeat appends an infinite list of a repeated value to the end of a list. The key thing about it is that it doesn't check whether the list is empty before committing to returning a non-empty list whose tail is a recursive call. This way laziness never ends up in an infinite loop checking the zipWith _ [] case.
So this code works, and for the purposes of the original question, it can be used to convert the language to just using two simple functions (++ and :[]). But the interpreter would need to do some static analysis for appending a repeated value and replace it with this special appendRepeat function (which can easily be done in Atlas). It seems hacky to only make this one implementation switcharoo, but that is all that is needed.

What is the connection between primitive recursion and catamorphisms?

Using the following catamorphism for natural numbers I can implement various arithmetic algorithms without having to deal with recursion:
cataNat :: b -> (b -> b) -> Natural -> b
cataNat zero succ = go
  where
    go n = if (n <= 0) then zero else succ (go (n - 1))

fib :: Natural -> Natural
fib = fst . cataNat (0, 1) (\(a, b) -> (b, a + b))
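For example, in ghci:
> map fib [0..10]
[0,1,1,2,3,5,8,13,21,34,55]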
cataNat looks similar to primitive recursion to me. At least each application of it seems guaranteed to terminate, no matter which combination of zero and succ is provided. With each iteration the overall problem is decomposed into the next smaller/simpler problem instance. So even if it is technically not primitive recursion it seems to be equally expressive. If this is true it would mean that a catamorphism is not enough to express general recursion. We would probably need a hylomorphism for that. Is my reasoning correct, that is, does the equivalence hold for any type of catamorphism, not just for natural numbers?
Primitive recursion corresponds directly to a paramorphism.
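To see the correspondence concretely on naturals, here is a sketch (paraNat and the factorial example are mine, not part of the original answer): the step function receives the predecessor as well as the recursive result, which is exactly the shape of a primitive-recursive definition.
import Numeric.Natural

paraNat :: b -> (Natural -> b -> b) -> Natural -> b
paraNat zero step = go
  where
    go n = if n <= 0 then zero else step (n - 1) (go (n - 1))

-- Factorial is primitive recursive: the step uses the predecessor
-- directly, something a plain cataNat can only recover by tupling.
factorial :: Natural -> Natural
factorial = paraNat 1 (\p r -> (p + 1) * r)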
You're correct that a catamorphism has equivalent theoretical power to a paramorphism, but they can be different in important ways in operational terms. For an example, let's go to lists instead of Nats.
cata :: b -> (a -> b -> b) -> [a] -> b
cata = flip foldr -- I'm lazy, but this argument order makes a bit more sense for this example

para :: b -> (a -> [a] -> b -> b) -> [a] -> b
para z _ [] = z
para z f (x:xs) = f x xs (para z f xs)

-- Removes the first element from the list which is equal to the other argument
delete1 :: Eq a => a -> [a] -> [a]
delete1 x xs = cata (const []) (\el k found -> if not found && el == x then k True else el : k found) xs False

-- Removes the first element from the list which is equal to the other argument
delete2 :: Eq a => a -> [a] -> [a]
delete2 x xs = para [] (\el raw processed -> if el == x then raw else el : processed) xs
Look at how awkward delete1 is, compared to delete2. Not only do you have to contort your logic by making the result of cata a function, but there's a very real operational cost, too. You have to traverse everything in the list after finding a matching element, and re-create all the (:) constructors. That can have a noticeable cost in efficiency. In comparison, delete2, when it finds the target element, can just use the existing tail of the list for the remainder, without even looking at it. Of course, most uses of foldr (real world, not this example) don't produce a function and don't want access to the unprocessed tail of the list. For them, the catamorphism is going to be slightly more efficient simply because of passing around less data.
So in terms of theoretical power, they're equivalent. In operational terms, each has a use, though catamorphisms are a lot more common.
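The theoretical equivalence can be made concrete with the standard tupling trick: a paramorphism is a catamorphism that rebuilds the unprocessed tail alongside the result. Here is a sketch against the cata defined above:
-- para via cata: carry the reconstructed tail with the result,
-- then throw the tail away at the end.
paraViaCata :: b -> (a -> [a] -> b -> b) -> [a] -> b
paraViaCata z f = snd . cata ([], z) (\x (xs, r) -> (x : xs, f x xs r))
Note that rebuilding the tail with (:) is precisely the operational cost discussed above, the one a native para avoids.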
For some expansion of the idea in more general terms, see the recursion-schemes library. It uses a rather different-looking formulation of the idea so that it can abstract over data types with different shapes, instead of needing a different type for cata/para for each data type they can be applied to. But it really is just an alternate way of packing up the same ideas, and other kinds of morphisms are covered as well, including much more niche (or even possibly useless) ones.

Building a BST from a depth-first preorder list in Haskell more idiomatically

This submission to Programming Praxis gives an O(n) function that "undoes" a preorder traversal of a binary search tree, converting a list back into a tree. Supplying the missing data declaration:
data Tree a = Leaf | Branch {value::a, left::Tree a, right:: Tree a}
  deriving (Eq, Show)

fromPreOrder :: Ord a => [a] -> Tree a
fromPreOrder [] = Leaf
fromPreOrder (a:as) = Branch a l (fromPreOrder bs)
  where
    (l,bs) = lessThan a as

lessThan n [] = (Leaf,[])
lessThan n all@(a:as)
  | a >= n = (Leaf,all)
  | otherwise = (Branch a l r,cs)
  where (l,bs) = lessThan a as
        (r,cs) = lessThan n bs
It's obvious that one constructor is added to the tree in each recursive step, which is key to its efficiency.
The only "problem" is that the list is threaded through the computation manually, which is not a terribly Haskellian way to do it and makes it a little harder to see that it is actually consumed element by element in a single-threaded manner.
I attempted to correct this using a state monad (prettified on Codepad):
import Control.Monad.State

data Tree a = Leaf
            | Branch {root::a, left::Tree a, right::Tree a}
            deriving (Eq,Show)

-- 'state' (lower case) builds a State computation from a function;
-- the old 'State' data constructor no longer exists in modern mtl.
peek = state peek' where
  peek' [] = (Nothing,[])
  peek' a@(x:_) = (Just x,a)

pop = state pop' where
  pop' [] = error "Tried to read past the end of the list"
  pop' (_:xs) = ((),xs)

prebuild' :: Ord a => State [a] (Tree a)
prebuild' = do
  next <- peek
  case next of
    Nothing -> return Leaf
    Just x -> do
      pop
      leftpart <- lessThan x
      rightpart <- prebuild'
      return (Branch x leftpart rightpart)

lessThan n = do
  next <- peek
  case next of
    Nothing -> return Leaf
    Just x ->
      if x < n
        then do
          pop
          leftpart <- lessThan x
          rightpart <- lessThan n
          return (Branch x leftpart rightpart)
        else
          return Leaf

prebuild :: Ord a => [a] -> Tree a
prebuild = evalState prebuild'
Unfortunately, this just looks obscenely messy, and doesn't seem any easier to reason about.
One thought I haven't been able to get anywhere with yet (in part because I don't have a deep enough understanding of the underlying concepts, quite likely): could I use a left fold over the list that builds a continuation that ultimately produces the tree? Would that be possible? Also, would it be anything short of insane?
Another thought was to write this as a tree unfold, but I don't think it's possible to do that efficiently; the list will end up being traversed too many times and the program will be O(n^2).
Edit
Taking things from another direction, I have the nagging suspicion that it might be possible to come up with an algorithm that starts by splitting up the list into increasing segments and decreasing segments, but I haven't yet found something concrete to do with that idea.
I think the problem you're having with State is that your primitives (push, pop, peek) are not the right ones. I think a better one would be something like available_, which checks if the front of the stack matches a particular condition, and executes something different in each case:
available_ p f m = do
  s <- get
  case s of
    x:xs | p x -> put xs >> f x
    _ -> m
Actually, in our use case, we can specialize a bit: we will always want to return a Leaf when the head of our stack doesn't satisfy the condition, and we'll always want to recurse when it does.
available p m = available_ p
(\x -> liftM2 (Branch x) (lessThan' x) m)
(return Leaf)
(You could also just write available to begin with and skip available_ entirely. In my first iteration, that is what I did.) Now writing fromPreOrder and lessThan is a snap, and I think it also gives some insight into their behavior. I'll name them with primes so we can double-check they do the right thing with QuickCheck.
fromPreOrder' = available (const True) fromPreOrder'
lessThan' n = available (<n) (lessThan' n)
And in ghci:
> quickCheck (\xs -> fromPreOrder (xs :: [Int]) == evalState fromPreOrder' xs)
+++ OK, passed 100 tests.
While I can't answer the question about continuation passing, I believe that the State monad based implementation can be written much more clearly. First, we can use notational conveniences such as those from Control.Applicative to make it easier to read. Second, we can upgrade the effect stack to include Maybe in order to capture the notion of failure (a) from taking the head of an empty list and (b) from the (a >= n) comparison as an effect.
import Control.Monad.State
import Control.Applicative
import Data.Maybe (fromMaybe)
The final code uses the backtracking-state monad transformer stack. This means that we wrap State around Maybe instead of Maybe around State. In some sense we can think of this as meaning that failure is the "primary" effect. In practice it means that if the algorithm fails there's no way to continue using potentially bad state and so it must backtrack to the last known good state.
type Preord a b = StateT [a] Maybe b
Since we keep taking the head of a list and want to capture that failure correctly, we'll use a "safe head" function (which is the natural destructor of a list anyway; it has since been added to Data.List in base):
-- Safe list destructor
uncons :: [a] -> Maybe (a, [a])
uncons [] = Nothing
uncons (a:as) = Just (a, as)
If we look at it cleverly we'll notice that this is already exactly the form of our monadic computation (StateT [a] Maybe b is isomorphic to [a] -> Maybe (b, [a])). We'll give it a more evocative name when lifted into the Monad.
-- Try to get the head or fail
getHead :: Preord a a
getHead = StateT uncons
A common feature of this algorithm is stopping local failures by providing a default value. I'll capture this in the certain combinator
-- Provides a default value for a failing computation
certain :: b -> Preord a b -> Preord a b
certain def p = p <|> return def
And now we can write the final algorithm very cleanly in our Preord monad.
fromPreOrder :: Ord a => Preord a (Tree a)
fromPreOrder = certain Leaf $ do
  a <- getHead
  Branch a <$> lessThan a <*> fromPreOrder

lessThan :: Ord a => a -> Preord a (Tree a)
lessThan n = certain Leaf $ do
  a <- getHead
  guard (a < n)
  Branch a <$> lessThan a <*> lessThan n
Note that Applicative style helps to indicate that we're building the components of the Branch constructor using further effectful (state consuming) computations. The guard short-circuits lessThan when the pivot is already the least element in the pre-order traversal. We also explicitly see how both fromPreOrder and lessThan default out to Leaf when they cannot compute a better result.
(Also note that fromPreOrder and lessThan are nearly identical now, a commonality Daniel Wagner exploited in his own answer when writing available.)
We finally would want to hide all the monadic noise since, to an outside user, this is just a pure algorithm.
rebuildTree :: Ord a => [a] -> Tree a
rebuildTree = fromMaybe Leaf . evalStateT fromPreOrder
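A quick check in a hypothetical ghci session (using the Tree type from the question, with Eq derived):
> rebuildTree [8,3,1,6,10] == Branch 8 (Branch 3 (Branch 1 Leaf Leaf) (Branch 6 Leaf Leaf)) (Branch 10 Leaf Leaf)
True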
For a complete picture, here's the implementation of the algorithm using only the State monad. Note all the extra noise for handling failure! We've absorbed the entire popElse function into the effects of the backtracking state monad. We also lift the if up into the failure effect. Without that effect stack, our combinators are terrifically specific to the application instead of decomplected and useful elsewhere.
-- Try to take the head of the state list and return the default
-- if that's not possible.
popElse :: b -> (a -> State [a] b) -> State [a] b
popElse def go = do
  x <- get
  case x of
    [] -> return def
    (a:as) -> put as >> go a

push :: a -> State [a] ()
push a = modify (a:)

fromPreOrder :: Ord a => State [a] (Tree a)
fromPreOrder = popElse Leaf $ \a -> Branch a <$> lessThan a <*> fromPreOrder

lessThan :: Ord a => a -> State [a] (Tree a)
lessThan n =
  popElse Leaf $ \a ->
    if a >= n
      then push a >> return Leaf
      else Branch a <$> lessThan a <*> lessThan n
As you've said, the state monad doesn't really improve the situation, and I don't think it can be expected to, as it's both much too general in that it allows arbitrary access to the state, and annoying in that it enforces unnecessary sequencing.
At first glance, this looks quite like a foldr: we do one thing for the empty case, and in the (:) case we take the head off and make a recursive call based on the tail. However, as the recursive call isn't just using the tail directly, it isn't quite a foldr.
We could express it as a paramorphism but I don't think that really adds anything to the readability.
What I did notice is that the complicated recursion on the tail is all based on lessThan, which led me to the following idea for breaking down the algorithm:
lessThans [] = []
lessThans (a:as) = (a, l) : lessThans bs
  where (l, bs) = lessThan a as

fromPreOrder2 :: Ord a => [a] -> Tree a
fromPreOrder2 = foldr (\(a, l) r -> Branch a l r) Leaf . lessThans
I'm sure lessThans could have a better name but I'm not quite sure what!
The foldr can also be expressed as foldr (uncurry Branch) Leaf but I'm not sure if that's an improvement.
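As with the State-based version earlier, agreement with the original can be checked (a hypothetical QuickCheck run, assuming Eq is derived for Tree):
> quickCheck (\xs -> fromPreOrder (xs :: [Int]) == fromPreOrder2 xs)
+++ OK, passed 100 tests.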
EDIT: also, lessThans is an unfoldr, leading to this version:
import Data.List (unfoldr)

fromPreOrder3 :: Ord a => [a] -> Tree a
fromPreOrder3 = foldr (uncurry Branch) Leaf . unfoldr lessThanList

lessThanList [] = Nothing
lessThanList (a:as) = Just ((a, l), bs)
  where (l, bs) = lessThan a as

Is there an indexed list in Haskell and is it good or bad?

I am a newcomer to the Haskell world and I am wondering if there is something like this:
data IndexedList a = IList Int [a]

findIndex :: (Int -> Int) -> IndexedList a -> (a, IndexedList a)
findIndex f (IList x l) = (l !! (f x), IList (f x) l)

next :: IndexedList a -> (a, IndexedList a)
next x = findIndex (+1) x
I've noticed that this kind of list is not purely functional but kind of useful for some applications. Should it be considered harmful?
Thanks,
Bob
It's certainly useful to have a list that comes equipped with a pointer to a particular location in the list. However, the way it's usually done in Haskell is somewhat different - rather than using an explicit pointer, we tend to use a zipper.
The list zipper looks like this:
data ListZipper a = LZ [a] a [a] deriving (Show)
You should think of the middle field a as being the element that is currently pointed to, the first field [a] as being the elements before the current position, and the final field [a] as being the elements after the current position.
Usually we store the elements before the current one in reverse order, for efficiency, so that the list [0, 1, 2, *3*, 4, 5, 6] with a pointer to the middle element, would be stored as
LZ [2,1,0] 3 [4,5,6]
You can define functions that move the pointer to the left or right
left (LZ (a:as) b bs) = LZ as a (b:bs)
right (LZ as a (b:bs)) = LZ (a:as) b bs
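Note that left and right as written are partial: they fail at either end of the list. If you want total versions, a common variant (my sketch, not part of the original answer) returns Maybe:
leftMay, rightMay :: ListZipper a -> Maybe (ListZipper a)
leftMay (LZ (a:as) b bs) = Just (LZ as a (b:bs))
leftMay _ = Nothing

rightMay (LZ as a (b:bs)) = Just (LZ (a:as) b bs)
rightMay _ = Nothing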
If you want to move to the left or right n times, then you can do that with the help of a function that takes another function, and applies it n times to its argument
times n f = (!!n) . iterate f
so that to move left three times, you could use
>> let lz = LZ [2,1,0] 3 [4,5,6]
>> (3 `times` left) lz
LZ [] 0 [1,2,3,4,5,6]
Your two functions findIndex and next can be written as
next :: ListZipper a -> (a, ListZipper a)
next = findIndex 1

findIndex :: Int -> ListZipper a -> (a, ListZipper a)
findIndex n x = let y@(LZ _ a _) = (n `times` right) x in (a, y)
Contrary to what you think, this list is in fact purely functional. The reason is that IList (f x) l creates a new list (and does not, as you may think, modify the current IndexedList). It is in general not that easy to create non-purely-functional data structures or functions in Haskell, as long as you stay away from unsafePerformIO.
The reason I would recommend against using the IndexedList is that there is no assurance that the index is less than the length of the list. In this case the lookup l!!(f x) will fail with an exception, which is generally considered bad style in Haskell. An alternative could be to use a safe lookup, which returns a Maybe a like the following:
findIndex :: (Int -> Int) -> IndexedList a -> (Maybe a, IndexedList a)
findIndex f (IList i l) = (maybe_x, IList new_i l)
  where
    new_i = f i
    maybe_x = if new_i >= 0 && new_i < length l
                then Just (l !! new_i)
                else Nothing
I also cannot think of a use case where such a list would be useful, but I guess I am just limited by my creativity ;)

Finding the leaves of an inductively-defined tree

So, I have a function of type:
genTree :: Node -> [Node]
Given a node, this function generates the set of children of that node in a tree. The function can be applied again to those children to generate their children, until it eventually generates a node with no children, i.e. a node for which genTree returns [].
What I'm trying to do is, given a starting node, generate the list of all leaf nodes in the tree that has it as the root.
Any advice?
The function from Martijn's answer (the flatten function at the end of this section) generates a list of all nodes in the tree. You can use this list and filter out the nodes without children to get the leaves:
nodes root = root : concatMap nodes (genTree root)
leaves root = filter (null . genTree) (nodes root)
You can also combine these two functions into one to directly generate just a list of leaves, if you prefer:
leaves node
  | null children = [node]
  | otherwise = concatMap leaves children
  where children = genTree node
Let's generalize it a bit:
leaves :: (a -> [a]) -> a -> [a]
leaves tree x = case tree x of
  [] -> [x]
  -- the node x has no children and is therefore a leaf
  xs -> concatMap (leaves tree) xs
  -- otherwise get the list of all leaves for each child and concatenate them
Applying static argument transformation (http://hackage.haskell.org/trac/ghc/ticket/888), we get
leaves :: (a -> [a]) -> a -> [a]
leaves tree x = leaves' x where
  leaves' x = case tree x of
    [] -> [x]
    xs -> concatMap leaves' xs
Use it as
leaves genTree root
or if you really want it to work only with genTree, inline it into the definition:
leaves1 x = case genTree x of
  [] -> [x]
  xs -> concatMap leaves1 xs
which is morally equivalent to sth's second answer.
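As a concrete illustration, here is a toy genTree (hypothetical, since the question's Node type isn't given) together with the leaves it produces:
-- Hypothetical generator: each node n below 4 has children 2n and 2n+1.
genTree :: Int -> [Int]
genTree n = if n >= 4 then [] else [2 * n, 2 * n + 1]

-- With the generalized function above: leaves genTree 1 == [4,5,6,7]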
(not exactly an answer to the question, but related)
I like to represent trees of a as "ListT [] a". (ListT from the List package on Hackage.)
Then the answer for this question is just to use the function lastL.
"Monad m => ListT m a" is a monadic list containing "a"s, where trying to get the next list item (which may find out there is no such item) is a monadic action in "m".
A usage example for ListT - a program that reads numbers from the user until the user types something that is not a number, printing the running sum after each input:
main =
  execute . joinM . fmap print .
  scanl (+) 0 .
  fmap (fst . head) .
  takeWhile (not . null) .
  fmap reads .
  joinM $ (repeat getLine :: ListT IO (IO String))
Where repeat, scanl and takeWhile are from Data.List.Class. They work both for regular lists and monadic lists.
joinM :: List l => l (ItemM l a) -> l a -- (l = ListT IO, ItemM l = IO)
execute :: List l => l a -> ItemM l () -- consume the whole list and run its actions
If you are familiar with Python, Python iterators/generators are "ListT IO"s.
When using [] instead of IO as the monad of the monadic list, the result is a tree. Why? Imagine a list where getting the next item is an action in the list monad - the list monad means there are several options, therefore there are several "next items", which makes it a tree.
You can construct monadic lists either with higher-order functions (like the example above), or with cons, or with a python-generator notation (with yield) using the GeneratorT monad transformer from the generator package in hackage.
Disclaimer: ListT and GeneratorT are in no way widely used. I wrote those and I am not aware of any other users except for myself. There are, however, several users of equivalent ListTs, such as the one from the Haskell wiki, NondetT, and others.
flatten node = node : concatMap flatten (genTree node)
