Let's say we have existing tree-like data and we would like to add information about the depth of each node. How can we easily achieve that?
data Tree = Node Tree Tree | Leaf
For each node we would like to know, in constant time, how deep it is. The data comes from an external module, so all we have is the type shown above. A real-life example would be an external HTML parser that just provides the XML tree, where we would like to gather data such as how many hyperlinks each node contains.
Functional languages are made for traversing trees and gathering data, so there should be an easy solution.
The obvious solution would be to create a parallel structure. Can we do better?
The standard trick, which I learned from Chris Okasaki's wonderful Purely Functional Data Structures is to cache the results of expensive operations at each node. (Perhaps this trick was known before Okasaki's thesis; I don't know.) You can provide smart constructors to manage this information for you so that constructing the tree need not be painful. For example, when the expensive operation is depth, you might write:
module SizedTree (SizedTree, sizedTree, node, leaf, depth) where

data SizedTree = Node !Int SizedTree SizedTree | Leaf

node :: SizedTree -> SizedTree -> SizedTree
node l r = Node (max (depth l) (depth r) + 1) l r

leaf :: SizedTree
leaf = Leaf

depth :: SizedTree -> Int
depth (Node d _ _) = d
depth Leaf = 0

-- since we don't expose the constructors, we should
-- provide a replacement for pattern matching
sizedTree :: (SizedTree -> SizedTree -> r) -> r -> SizedTree -> r
sizedTree f _ (Node _ l r) = f l r
sizedTree _ v Leaf = v
Constructing SizedTrees costs O(1) extra work at each node (hence it is O(n) work to convert an n-node Tree to a SizedTree), but the payoff is that checking the depth of a SizedTree -- or of any subtree -- is an O(1) operation.
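As a usage sketch (fromTree is a hypothetical helper; it assumes the question's original Tree type and the module above), you pay the O(n) conversion once and every later depth query is O(1):
import SizedTree

data Tree = Node Tree Tree | Leaf

fromTree :: Tree -> SizedTree
fromTree (Node l r) = node (fromTree l) (fromTree r)
fromTree Leaf = leaf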
You do need somewhere to store those Ints. Define Tree as
data Tree a = Node (Tree a) a (Tree a) | Leaf a
and then write a function
annDepth :: Tree a -> Tree (Int, a)
Your original Tree is Tree () and with pattern synonyms you can recover nice constructors.
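One possible way to fill that in (a sketch, assuming leaves have depth 0; the helper reads the cached annotation, so no subtree is traversed twice):
depth :: Tree (Int, a) -> Int
depth (Leaf (d, _)) = d
depth (Node _ (d, _) _) = d

annDepth :: Tree a -> Tree (Int, a)
annDepth (Leaf a) = Leaf (0, a)
annDepth (Node l a r) = Node l' (1 + max (depth l') (depth r'), a) r'
  where
    l' = annDepth l
    r' = annDepth r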
If you want to preserve the original tree for some reason, you can define a view:
{-# LANGUAGE GADTs, DataKinds #-}

data Shape = SNode Shape Shape | SLeaf

data Tree a sh where
  Leaf :: a -> Tree a SLeaf
  Node :: Tree a lsh -> a -> Tree a rsh -> Tree a (SNode lsh rsh)
With this you have a guarantee that an annotated tree has the same shape as the unannotated one. But this doesn't work well without proper dependent types.
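For instance, an annotation function can be given a type that guarantees the output has the same shape as the input. A sketch (annotate and depthOf are hypothetical names, leaves again at depth 0):
depthOf :: Tree (Int, a) sh -> Int
depthOf (Leaf (d, _)) = d
depthOf (Node _ (d, _) _) = d

annotate :: Tree a sh -> Tree (Int, a) sh
annotate (Leaf a) = Leaf (0, a)
annotate (Node l a r) = Node l' (1 + max (depthOf l') (depthOf r'), a) r'
  where
    l' = annotate l
    r' = annotate r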
Also, have a look at the question Boilerplate-free annotation of ASTs in Haskell?
The standard solution is what @DanielWagner suggested: just extend the data structure. This can be somewhat inconvenient, but the inconvenience can be solved: smart constructors for creating values and records for pattern matching.
Perhaps Data types a la carte could help, although I haven't used this approach myself. There is a library compdata based on that.
A completely different approach would be to efficiently memoize the values you need. I was trying to solve a similar problem and one of the solutions is provided by the library stable-memo. Note that this isn't a purely functional approach, as the library is internally based on object identity, but the interface is pure and works perfectly for the purpose.
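For instance, a sketch of how that might look for the depth problem, assuming the memo combinator that stable-memo exports from Data.StableMemo (it memoizes on object identity, so shared subtrees are measured only once):
import Data.StableMemo (memo)

data Tree = Node Tree Tree | Leaf

-- recursive calls go back through the memoized entry point
depth :: Tree -> Int
depth = memo go
  where
    go Leaf = 0
    go (Node l r) = 1 + max (depth l) (depth r)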
Related
I'm reading Purely Functional Data Structures and trying to solve the exercises it gives in Haskell.
I've defined Tree in the standard way:
data Tree a = Empty | Node a (Tree a) (Tree a)
I'd like to define Set as a Tree whose nodes are an instance of Ord. Is there a way to express this in Haskell? Something like type Set = Tree Ord, or am I doomed to reimplement the tree every time I want to express some data structure as a tree?
Do you intend to use the tree as a binary search tree (BST)? For that to work, you need all operations to preserve the strong BST property: for each Node constructor in the BST, all Nodes in the left subtree contain elements less than the one in the current Node, and all Nodes in the right subtree contain elements greater.
You absolutely need to preserve that property. Once it's no longer true, all further BST operations lose correctness guarantees. This conclusion has consequences. You can't expose the constructors of the type. You can't expose operations to work on it that don't preserve the BST property.
So every operation that operates on a set needs to have access to the Ord instance for the type contained in the node. (Well, other than a couple of special cases like checking if a set is empty or creating a singleton set, which never have to deal with the order of children.)
At that point, what exactly can you share about the tree type with other uses? You can't share operations. You can't use the constructors. That leaves, well.. nothing useful. You could share the name of the type with no ways to do much of anything with it.
So, no.. You can't share a binary search tree data type with other uses that just seek an arbitrary binary tree. Attempting to do so just results in a broken BST.
This is exactly what the Data.Set module from the containers library does:
http://hackage.haskell.org/package/containers-0.5.10.2/docs/src/Data-Set-Internal.html#Set
You can try to read its implementation. type Set = Tree Ord won't compile unless you use some exotic language extensions, but you probably don't want those anyway; you're more interested in functions like:
insert :: Ord a => a -> Tree a -> Tree a
find :: Ord a => a -> Tree a -> Maybe a
delete :: Ord a => a -> Tree a -> Tree a
This is how type classes are meant to be used.
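To make that concrete, here is a minimal sketch of such a module (hypothetical, simplified, and unbalanced; the real Data.Set balances its tree). The constructors stay hidden, so clients can only build valid BSTs:
module Set (Set, empty, insert, member) where

-- constructors are NOT exported, so the BST invariant can't be broken
data Set a = Empty | Node a (Set a) (Set a)

empty :: Set a
empty = Empty

insert :: Ord a => a -> Set a -> Set a
insert x Empty = Node x Empty Empty
insert x t@(Node y l r) = case compare x y of
  LT -> Node y (insert x l) r
  GT -> Node y l (insert x r)
  EQ -> t

member :: Ord a => a -> Set a -> Bool
member _ Empty = False
member x (Node y l r) = case compare x y of
  LT -> member x l
  GT -> member x r
  EQ -> True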
A few years ago, during a C# course I learned to write a binary tree that looked more or less like this:
data Tree a = Branch a (Tree a) (Tree a) | Leaf
I saw the benefit of it: it had its values on the branches, which allowed for quick and easy lookup and insertion, because it would encounter a value at the root of each branch all the way down until it hit a leaf, which held no value.
Ever since I started learning Haskell, however, I've seen numerous examples of trees that are defined like this:
data Tree a = Branch (Tree a) (Tree a) | Leaf a
That definition puzzles me. I can't see the usefulness of having data only on the elements that don't branch; all the values end up at the bottom of the tree. To me, that seems like a poorly designed alternative to a List. It also makes me question its lookup time, since the tree can't assess which branch to go down to find a value; instead it needs to visit every node to find what it's looking for.
So, can anyone shed some light on why the second version (value on leaves) is so much more prevalent in Haskell than the first version?
I think this depends on what you're trying to model and how you're trying to model it.
A tree where the internal nodes store values and the leaves are just leaves is essentially a standard binary tree (treat each leaf as NULL and you basically have an imperative-style binary tree). If the values are stored in sorted order, you now have a binary search tree. There are many specific advantages to storing data this way, most of which transfer directly over from imperative settings.
Trees where the leaves store the data and the internal nodes are just for structure do have their advantages. For example, red/black trees support two powerful operations called split and join that have advantages in some circumstances. split takes as input a key, then destructively modifies the tree to produce two trees, one of which contains all keys less than the specified input key and one containing the remaining keys. join is, in a sense, the opposite: it takes in two trees where one tree's values are all less than the other tree's values, then fuses them together into a single tree. These operations are particularly difficult to implement on most red/black trees, but are much simpler if all the data is stored in the leaves only rather than in the internal nodes. This paper detailing an imperative implementation of red/black trees mentions that some older implementations of red/black trees used this approach for this very reason.
As another potential advantage of storing keys in the leaves, suppose that you want to implement the concatenate operation, which joins two lists together. If the internal nodes don't carry data, this is as simple as
concat first second = Branch first second
This works because no data is stored in those internal nodes. If the data is instead stored in the internal nodes, you need to somehow move a key from one of the leaves up into the new concatenation node, which takes more time and is trickier to work with.
Finally, in some cases, you might want to store the data in the leaves because the leaves are fundamentally different from internal nodes. Consider a parse tree, for example, where the leaves store specific terminals from the parse and the internal nodes store all the nonterminals in the production. In this case, there really are two different types of nodes, so it doesn't make sense to store arbitrary data in the internal nodes.
Hope this helps!
You described a tree with data at the leaves as "a poorly designed alternative to a List."
I agree that this could be used as an alternative to a list, but it's not necessarily poorly designed! Consider the data type
data Tree t = Leaf t | Branch (Tree t) (Tree t)
You can define cons and snoc (append to end of list) operations -
cons :: t -> Tree t -> Tree t
cons t (Leaf s) = Branch (Leaf t) (Leaf s)
cons t (Branch l r) = Branch (cons t l) r
snoc :: Tree t -> t -> Tree t
snoc (Leaf s) t = Branch (Leaf s) (Leaf t)
snoc (Branch l r) t = Branch l (snoc r t)
These run (for roughly balanced lists) in O(log n) time where n is the length of the list. This contrasts with the standard linked list, which has O(1) cons and O(n) snoc operations. You can also define a constant-time append (as in templatetypedef's answer)
append :: Tree t -> Tree t -> Tree t
append l r = Branch l r
which is O(1) for two lists of any size, whereas the standard list is O(n) where n is the length of the left argument.
In practice you would want to define slightly smarter versions of these functions which attempt to keep the tree balanced. To do this it is often useful to have some additional information at the branches, which could be done by having multiple kinds of branch (as in a red-black tree, which has "red" and "black" nodes) or by explicitly including additional data at the branches, as in
data Tree b a = Leaf a | Branch b (Tree b a) (Tree b a)
For example, you can support an O(1) size operation by storing the total number of elements in both subtrees in the nodes. All of your operations on the tree become slightly more complicated since you need to correctly persist the information about subtree sizes -- in effect the work of computing the size of the tree is amortized over all the operations that construct the tree (and cleverly persisted, so that minimal work is done whenever you need to reconstruct a size later).
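A sketch of the size-caching idea, assuming the Tree b a type above with b instantiated to Int (size and branch are hypothetical names):
type SizedTree a = Tree Int a

size :: SizedTree a -> Int
size (Leaf _) = 1
size (Branch n _ _) = n

-- the smart constructor keeps the cached size correct
branch :: SizedTree a -> SizedTree a -> SizedTree a
branch l r = Branch (size l + size r) l r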
Neither variant is simply better; each wins in different contexts. I'll explain just a couple of basic considerations to show why your intuition fails. The general idea, though, is that different data structures need different things.
Empty leaf nodes can actually be a space (and therefore time) problem in some contexts. If a node is represented by a bit of information and two pointers to its children, you'll end up with two null pointers per node whose children are both leaves. That's two machine words per leaf node, which can add up to quite a bit of space. Some structures avoid this by ensuring that each leaf holds at least one piece of information to justify its existence. In some cases (such as ropes), each leaf may have a fairly large and dense payload.
Making internal nodes bigger (by storing information in them) makes it more expensive to modify the tree. Changing a leaf in a balanced tree typically forces you to allocate replacements for O(log n) internal nodes. If each of those is larger, you've just allocated more space and spent extra time to copy more words. The extra size of the internal nodes also means that you can fit less of the tree structure into the CPU cache.
In Scheme, the primitive eq? tests whether its arguments are the same object. For example, in the following list
(define lst
  (let ((x (list 'a 'b)))
    (cons x x)))
The result of
(eq? (car lst) (cdr lst))
is true, and moreover it is true without having to peer into (car lst) and (cdr lst). This allows you to write efficient equality tests for data structures that have a lot of sharing.
Is the same thing ever possible in Haskell? For example, consider the following binary tree implementation
data Tree a = Tip | Bin a (Tree a) (Tree a)
left (Bin _ l _) = l
right (Bin _ _ r) = r
mkTree :: Int -> Tree Int
mkTree 0 = Tip
mkTree n = let t = mkTree (n-1) in Bin n t t
which has sharing at every level. If I create a tree with let tree = mkTree 30 and I want to see if left tree and right tree are equal, naively I have to traverse over a billion nodes to discover that they are the same tree, which should be obvious because of data sharing.
I don't expect there is a simple way to discover data sharing in Haskell, but I wondered what the typical approaches to dealing with issues like this are, when it would be good to detect sharing for efficiency purposes (or e.g. to detect cyclic data structures).
Are there unsafe primitives that can detect sharing? Is there a well-known way to build data structures with explicit pointers, so that you can compare pointer equality?
There are lots of approaches.
Generate unique IDs and stick everything in a finite map (e.g. IntMap).
A refined version of the previous option is to make the graph explicit, e.g. using fgl.
Use stable names (see the sketch after this list).
Use IORefs (see also), which have an Eq instance regardless of the contained type.
There are libraries for observable sharing.
As mentioned above, there is reallyUnsafePtrEquality# but you should understand what's really unsafe about it before you use it!
See also this answer about avoiding equality checks altogether.
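As an illustration of the stable-names option, a sketch using System.Mem.StableName (sameNode is a hypothetical helper; a False result can be a false negative, but True means both arguments are the same heap object):
import Control.Exception (evaluate)
import System.Mem.StableName (makeStableName, eqStableName)

sameNode :: a -> a -> IO Bool
sameNode x y = do
  x' <- evaluate x  -- force to WHNF so unevaluated thunks don't hide sharing
  y' <- evaluate y
  sx <- makeStableName x'
  sy <- makeStableName y'
  return (eqStableName sx sy)
With the question's tree, sameNode (left tree) (right tree) can return True without traversing a billion nodes.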
It is not possible in Haskell, the pure language.
But in its implementation in GHC, there are loopholes, such as
the use of reallyUnsafePtrEquality# or
introspection libraries like ghc-heap-view.
In any case, using this in regular code would be very unidiomatic; at most I could imagine that building a highly specialized library for something (memoization, hash tables, whatever) that then provides a sane, pure API might be acceptable.
There is reallyUnsafePtrEquality#. Also see here
Sometimes I get myself using different types of trees in Haskell and I don't know what they are called or where to get more information on algorithms using them or class instances for them, or even some pre-existing code or library on hackage.
Examples:
Binary trees where the labels are on the leaves or the branches:
data BinTree1 a = Leaf
                | Branch { label :: a, leftChild :: BinTree1 a, rightChild :: BinTree1 a }

data BinTree2 a = Leaf { label :: a }
                | Branch { leftChild :: BinTree2 a, rightChild :: BinTree2 a }
Similarly, trees with a label for each child node, or one general label for all children:
data Tree1 a = Branch {label :: a, children :: [Tree1 a]}
data Tree2 a = Branch {labelledChildren :: [(a, Tree2 a)]}
Sometimes I start out using Tree2 and somewhere in the course of development it gets refactored into Tree1, which seems simpler to deal with, but I never gave it a lot of thought. Is there some kind of duality here?
Also, if you can post some other different kinds of trees that you think are useful, please do.
In summary: everything you can tell me about those trees will be useful! :)
Thanks.
EDIT:
Clarification: this is not homework. It's just that I usually end up using these data types and creating instances (Functor, Monad, etc...), and maybe if I knew their names I would find libraries with the relevant things implemented, plus more theoretical information on them.
Usually when a library on Hackage has Tree in the name, it implements BinTree2 or some version of a non-binary tree with labels only on the leaves, so it seems to me that maybe Tree2 and BinTree2 go by some other name or identifier.
Also I feel that there may be some kind of duality or isomorphism, or a way of turning code that uses Tree1 into code that uses Tree2 with some transformation. Is there? Maybe it's just an impression.
The names I've heard:
BinTree1 is a binary tree
BinTree2: I don't know a name for it, but you can use such a tree to represent a prefix-free code, like Huffman coding for example
Tree1 is a Rose tree
Tree2 is isomorphic to [Tree1] (a forest of Tree1); another way to view it is as a Tree1 without a label for the root. A sketch of the isomorphism follows this list.
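Here is that isomorphism spelled out, with the question's types renamed to avoid constructor clashes:
data Tree1 a = Branch1 a [Tree1 a]
data Tree2 a = Branch2 [(a, Tree2 a)]

toForest :: Tree2 a -> [Tree1 a]
toForest (Branch2 kids) = [ Branch1 a (toForest t) | (a, t) <- kids ]

fromForest :: [Tree1 a] -> Tree2 a
fromForest ts = Branch2 [ (a, fromForest cs) | Branch1 a cs <- ts ]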
A binary tree that only has labels in the leaves (BinTree2) is usually used for hash maps, because the tree structure itself doesn't offer any information other than the binary position of the leaves.
So, if you have 4 values with the following hash codes:
...000001 A
...000010 B
...000011 C
...000010 D
... you might store them in a binary tree (an implicit patricia trie) like so:
+ <- Bit #1 (least significant bit) of hash code
/ \ 0 = left, 1 = right
/ \
[B, D] + <- Bit #2
/ \
/ \
[A] [C]
We see that since the hash codes of B and D "start" with 0, they are stored in the left root child. They have exactly the same hash codes, so no more forks are necessary. The hash codes of A and C both "start" with 1, so another fork is necessary. A has bit 2 as 0, so it goes to the left, and C with 1 goes to the right.
This hash table implementation is kind of bad, because hashes might have to be recomputed when certain elements are inserted, but no matter.
BinTree1 is just an ordinary binary tree, and is used for fast order-based sets. Nothing more to say about it, really.
The only difference between Tree1 and Tree2 is that Tree2 can't have root node labels. This means that if used as a prefix tree, it cannot contain the empty string. It has very limited use, and I haven't seen anything like it in practice. Tree1, however, obviously has an use as a non-binary prefix tree, as I said.
I'm currently trying to come up with a data structure that fits the needs of two automata learning algorithms I'd like to implement in Haskell: RPNI and EDSM.
Intuitively, something close to what zippers are to trees would be perfect: those algorithms are state merging algorithms that maintain some sort of focus (the Blue Fringe) on states and therefore would benefit from some kind of zipper to reach interesting points quickly. But I'm kind of lost, because a DFA (Deterministic Finite Automaton) is more of a graph-like structure than a tree-like structure: transitions can take you back to an earlier point in the structure, which zippers are unlikely to handle well.
So my question is: how would you go about representing a DFA (or at least its transitions) so that you could manipulate it in a fast fashion?
Let me begin with the usual opaque representation of automata in Haskell:
newtype Auto a b = Auto (a -> (b, Auto a b))
This represents a function that takes some input and produces some output along with a new version of itself. For convenience it's a Category as well as an Arrow. It's also a family of applicative functors. Unfortunately this type is opaque. There is no way to analyze the internals of this automaton. However, if you replace the opaque function by a transparent expression type you should get automata that you can analyze and manipulate:
data Expr :: * -> * -> * where
  -- Stateless
  Id :: Expr a a
  -- Combinators
  Connect :: Expr a b -> Expr b c -> Expr a c
  -- Stateful
  Counter :: (Enum b) => b -> Expr a b
This gives you access to the structure of the computation. It is also a Category, but not an arrow. Once it becomes an arrow you have opaque functions somewhere.
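To connect the two representations, here is a sketch of an interpreter from the transparent type back to the opaque one (toAuto is a hypothetical name):
toAuto :: Expr a b -> Auto a b
toAuto Id = let auto = Auto (\x -> (x, auto)) in auto
toAuto (Connect f g) = go (toAuto f) (toAuto g)
  where
    -- run the first automaton, feed its output to the second
    go (Auto f') (Auto g') = Auto $ \x ->
      let (y, f'') = f' x
          (z, g'') = g' y
      in (z, go f'' g'')
toAuto (Counter b) = Auto (\_ -> (b, toAuto (Counter (succ b))))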
Can you just use a graph to get started? I think the fgl package is part of the Haskell Platform.
Otherwise you can try defining your own structure with 'deriving (Data)' and use the "Scrap Your Zipper" library to get the Zipper.
If you don't need any fancy graph algorithms you can represent your DFA's transition function as a Map (State, Symbol) State. This gives you fast access and manipulation. You also get focus by keeping track of the current state.
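A minimal sketch of that representation (State, Symbol, and the field names are hypothetical):
import qualified Data.Map as Map
import Data.Map (Map)

type State = Int
type Symbol = Char

data DFA = DFA
  { transitions :: Map (State, Symbol) State
  , start       :: State
  , accepting   :: [State]
  }

-- total lookup: Nothing means the transition is undefined
step :: DFA -> State -> Symbol -> Maybe State
step dfa s c = Map.lookup (s, c) (transitions dfa)

accepts :: DFA -> String -> Bool
accepts dfa = go (Just (start dfa))
  where
    go (Just s) (c:cs) = go (step dfa s c) cs
    go (Just s) []     = s `elem` accepting dfa
    go Nothing  _      = False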
Take a look at the regex-tdfa package: http://hackage.haskell.org/package/regex-tdfa
The source is pretty complex, but it's an implementation of regexes with tagged DFAs tuned for performance, so it should illustrate some good practices for representing DFAs efficiently.