I am looking for a data structure in Haskell that supports both fast indexing and fast append. This is for a memoization problem which arises from recursion.
From the way vectors work in C++ (which are mutable, but that shouldn't matter in this case) it seems immutable vectors with both (amortized) O(1) append and O(1) indexing should be possible (OK, it's not; see the comments to this question). Is this possible in Haskell, or should I go with Data.Sequence, which has (AFAICT anyway) O(1) append and O(log(min(i,n-i))) indexing?
On a related note, as a Haskell newbie I find myself longing for a practical, concise guide to Haskell data structures. Ideally this would give a fairly comprehensive overview of the most practical data structures, along with performance characteristics and pointers to Haskell libraries where they are implemented. There seems to be a lot of information out there, but I have found it to be a little scattered. Am I asking too much?
For simple memoization problems, you typically want to build the table once and then not modify it later. In that case, you can avoid having to worry about appending, by instead thinking of the construction of the memoization table as one operation.
One method is to take advantage of lazy evaluation and refer to the table while we're constructing it.
import Data.Array
fibs = listArray (0, n-1) $ 0 : 1 : [fibs!(i-1) + fibs!(i-2) | i <- [2..n-1]]
  where n = 100
This method is especially useful when the dependencies between the elements of the table make it difficult to come up with a simple order of evaluating them ahead of time. However, it requires using boxed arrays or vectors, which may make this approach unsuitable for large tables due to the extra overhead.
For unboxed vectors, you have operations like constructN which lets you build a table in a pure way while using mutation underneath to make it efficient. It does this by giving the function you pass an immutable view of the prefix of the vector constructed so far, which you can then use to compute the next element.
import Data.Vector.Unboxed as V

fibs = constructN 100 f
  where
    f xs | i < 2     = i
         | otherwise = xs!(i-1) + xs!(i-2)
      where i = V.length xs
If memory serves, C++ vectors are implemented as an array with capacity and size information. When an insertion would push the size beyond the capacity, the capacity is doubled. This gives amortized O(1) insertion (not plain O(1) as you claim), and can be emulated just fine in Haskell using the Array type, perhaps with suitable IO or ST prepended.
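As a rough sketch of the doubling strategy (my own, not part of the original answer, and using Data.Vector.Mutable inside ST rather than Array):

import Control.Monad.ST (runST)
import qualified Data.Vector as V
import qualified Data.Vector.Mutable as MV

-- Append the elements of a list one by one, doubling the buffer whenever
-- it fills up, then freeze the written prefix into an immutable vector.
fromListGrow :: [a] -> V.Vector a
fromListGrow xs0 = runST $ do
  buf0 <- MV.new 1
  go buf0 0 xs0
  where
    go buf n []       = V.freeze (MV.slice 0 n buf)
    go buf n (x : xs) = do
      buf' <- if n == MV.length buf
                then MV.grow buf (MV.length buf)  -- double the capacity
                else return buf
      MV.write buf' n x
      go buf' (n + 1) xs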
Take a look at this to make a more informed choice of what you should use.
But the simple thing is, if you want the equivalent of a C++ vector, use Data.Vector.
So far I've only found vectors and sequences, but neither of those can replace an element of a list in O(1). Such a data structure would of course violate the immutable character of Haskell's structures, but maybe there still exists some dirty implementation?
Any feedback is welcome.
As you suggest yourself – I'm also pretty sure a safe, purely-functional update in O(1) is not possible. What is possible is O(log n) with a tree-like implementation; for instance, instead of [a] you could use Data.Map.Map Int a with a contiguous region of indices. Also, it is possible to do a batch update of k ≤ n elements in a list or vector in only O(n), instead of the O(k·n) it would take to insert them one by one. Check out //.
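For instance, a small illustration (the names and values are mine, not from the answer) of a batch update with Data.Vector's // operator:

import qualified Data.Vector as V

-- Batch-updating k indices with (//) costs O(n) total,
-- instead of O(k*n) for k separate single-element updates.
updated :: V.Vector Int
updated = V.fromList [0 .. 9] V.// [(2, 20), (5, 50), (7, 70)]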
If none of that is fast enough for you, then yes, you will need to go into the dark realm of mutability. Fortunately, Haskell offers a good safety armour and flashlight for such journeys: the ST monad. The way it works is, you wrap the entire region where you need to do mutable updates in runST. Inside that region, you use MVectors, which support O(1) mutable element updates, much like you could in an imperative language. But thanks to a type-system trick, runST ensures that all these side-effects stay confined to within the local scope.
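A minimal sketch of that runST pattern, assuming the input vector has at least two elements (the function name and values are illustrative, not from the answer):

import Control.Monad.ST (runST)
import qualified Data.Vector as V
import qualified Data.Vector.Mutable as MV

bumpFirstTwo :: V.Vector Int -> V.Vector Int
bumpFirstTwo v = runST $ do
  mv <- V.thaw v      -- copy into a mutable vector
  MV.write mv 0 100   -- O(1) in-place writes
  MV.write mv 1 200
  V.freeze mv         -- back to an immutable vector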
I am looking for a "List" with log(N) insert/delete at index i. I put the word "List" in quotes, because I don't mean it to be an actual Haskell List, but just any ordered container with a bunch of objects (in fact, it probably needs some sort of tree internally). I am surprised that I haven't found a great solution yet....
This is the best solution I have so far: use Data.Sequence with these two functions.
import Prelude hiding (drop, splitAt)
import Data.Sequence

doInsert position value seq = before >< (value <| after)
  where (before, after) = splitAt position seq

doDelete position seq = before >< drop 1 after
  where (before, after) = splitAt position seq
While these are technically all log(N) functions, it feels like I am doing a bunch of extra stuff for each insert/delete.... In other words, this scales like K*log(N) for a K that is larger than it should be. Plus, since I have to define this stuff myself, I feel like I am using Sequence for something it wasn't designed for.
Is there a better way?
This is a related question (although it only deals with Sequences, I would happily use anything else): Why doesn't Data.Sequence have `insert' or `insertBy', and how do I efficiently implement them?
And yes, this question is related to this other one I posted a few days ago: Massive number of XML edits
There are structures such as catenable sequences, catenable queues, etc. that can give you O(1) join. However, all such structures I know of get away with this by giving you O(i) split. If you want split and join both to be as efficient as possible, I think a finger tree (i.e., Sequence) is your best bet. The whole point of the way Sequence is structured is to ensure that splitting, joining, etc. involve an asymptotically modest amount of juggling.
If you could get away with an IntMap which started sparse and got denser, and was occasionally "rebuilt" when things got too dense, then that might give you better performance -- but that really depends on your patterns of use.
If you work carefully with the predefined Haskell lists, there should be no problem with them (for example, when concatenating two lists).
If you'd like a list implementation with efficient insertion and deletion, an AVL tree or any other kind of balanced binary tree would work.
For example, store in an AVL tree tuples (Int, a), where the Int is the index into the list and a is the element.
The insertion and deletion operations would then have logarithmic average-case complexity.
I hope this answers your question.
I'm learning Haskell and have read a couple of articles regarding performance differences between Haskell lists and (insert your language)'s arrays.
Being a learner I obviously just use lists without even thinking about performance difference.
I recently started investigating and found numerous data structure libraries available in Haskell.
Can someone please explain the difference between Lists, Arrays, Vectors, Sequences without going very deep in computer science theory of data structures?
Also, are there some common patterns where you would use one data structure instead of another?
Are there any other forms of data structures that I am missing and might be useful?
Lists Rock
By far the most friendly data structure for sequential data in Haskell is the List:
data [a] = a:[a] | []
Lists give you ϴ(1) cons and pattern matching. The standard library, and for that matter the Prelude, is full of useful list functions that should litter your code (foldr, map, filter). Lists are persistent, a.k.a. purely functional, which is very nice. Haskell lists aren't really "lists" because they are coinductive (other languages call these streams), so things like
ones :: [Integer]
ones = 1:ones
twos = map (+1) ones
tenTwos = take 10 twos
work wonderfully. Infinite data structures rock.
Lists in Haskell provide an interface much like iterators in imperative languages (because of laziness). So, it makes sense that they are widely used.
On the other hand
The first problem with lists is that indexing into them ((!!)) takes ϴ(k) time, which is annoying. Also, appends (++) can be slow, but Haskell's lazy evaluation model means that these can be treated as fully amortized, if they happen at all.
The second problem with lists is that they have poor data locality. Real processors incur high constants when objects in memory are not laid out next to each other. So, in C++, std::vector has faster "snoc" (putting objects at the end) than any pure linked-list data structure I know of, although it is not a persistent data structure and so is less friendly than Haskell's lists.
The third problem with lists is that they have poor space efficiency. Bunches of extra pointers push up your storage (by a constant factor).
Sequences Are Functional
Data.Sequence is internally based on finger trees (I know, you don't want to know this) which means that they have some nice properties
Purely functional. Data.Sequence is a fully persistent data structure.
Darn fast access to the beginning and end of the tree. ϴ(1) (amortized) to get the first or last element, or to append trees. At the thing lists are fastest at, Data.Sequence is at most a constant slower.
ϴ(log n) access to the middle of the sequence. This includes inserting values to make new sequences.
High quality API
On the other hand, Data.Sequence doesn't do much for the data locality problem, and it only works for finite collections (it is less lazy than lists).
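As a small illustration (mine, not from the original answer) of those end and middle operations on a Seq:

import qualified Data.Sequence as Seq
import Data.Sequence (Seq, (|>), (><))

-- Amortized O(1) appends at the ends, O(log n) concatenation and indexing.
demo :: Seq Int
demo = (Seq.fromList [1, 2, 3] |> 4) >< Seq.fromList [5, 6]

third :: Int
third = Seq.index demo 2   -- O(log(min(i, n - i)))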
Arrays are not for the faint of heart
Arrays are one of the most important data structures in CS, but they don't fit very well with the lazy, pure functional world. Arrays provide ϴ(1) access to the middle of the collection and exceptionally good data locality/constant factors. But, since they don't fit very well into Haskell, they are a pain to use. There are actually a multitude of different array types in the current standard library. These include fully persistent arrays, mutable arrays for the IO monad, mutable arrays for the ST monad, and unboxed versions of the above. For more, check out the Haskell wiki.
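For instance, a tiny sketch (my own, assuming the plain immutable Data.Array API) of fixed-bounds construction and O(1) reads:

import Data.Array

-- An immutable array with fixed bounds; reads with (!) are O(1).
squares :: Array Int Int
squares = listArray (0, 9) [i * i | i <- [0 .. 9]]

sixteen :: Int
sixteen = squares ! 4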
Vector is a "better" Array
The Data.Vector package provides all of the array goodness in a higher-level and cleaner API. Unless you really know what you are doing, you should use these if you need array-like performance. Of course, some caveats still apply: mutable-array-like data structures just don't play nice in pure lazy languages. Still, sometimes you want that O(1) performance, and Data.Vector gives it to you in a usable package.
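For example, a short sketch (illustrative names, not from the answer) of the O(1) indexing and slicing Data.Vector offers:

import qualified Data.Vector as V

-- An immutable vector built from a generator function.
doubles :: V.Vector Int
doubles = V.generate 10 (* 2)     -- <0,2,4,...,18>

firstHalf :: V.Vector Int
firstHalf = V.slice 0 5 doubles   -- O(1) slice, no copying

eight :: Int
eight = doubles V.! 4             -- O(1) indexing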
You have other options
If you just want lists with the ability to efficiently insert at the end, you can use a difference list. The best example of lists screwing up performance tends to come from [Char], which the Prelude has aliased as String. Char lists are convenient, but tend to run on the order of 20 times slower than C strings, so feel free to use Data.Text or the very fast Data.ByteString. I'm sure there are other sequence-oriented libraries I'm not thinking of right now.
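As a rough sketch of the difference-list idea (the dlist package provides a polished version; these names are my own):

-- Represent a list as a function that prepends it, so adding
-- a single element at the end is O(1).
newtype DList a = DList ([a] -> [a])

emptyD :: DList a
emptyD = DList id

snocD :: DList a -> a -> DList a
snocD (DList f) x = DList (f . (x :))

toListD :: DList a -> [a]
toListD (DList f) = f []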
Conclusion
90+% of the time I need a sequential collection in Haskell, lists are the right data structure. Lists are like iterators: functions that consume lists can easily be used with any of these other data structures using the toList functions they come with. In a better world the Prelude would be fully parametric as to what container type it uses, but currently [] litters the standard library. So, using lists (almost) everywhere is definitely okay.
You can get fully parametric versions of most of the list functions (and are noble to use them)
Prelude.map ---> Prelude.fmap (works for every Functor)
Prelude.foldr/foldl/etc ---> Data.Foldable.foldr/foldl/etc
Prelude.sequence ---> Data.Traversable.sequence
etc
In fact, Data.Traversable defines an API that is more or less universal across anything "list-like".
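For example (a small illustration of my own, assuming a base where sum works over any Foldable):

import qualified Data.Sequence as Seq

-- fmap and the Foldable-based sum work the same way on lists and Seqs.
listTotal, seqTotal :: Int
listTotal = sum (fmap (+ 1) [1, 2, 3])
seqTotal  = sum (fmap (+ 1) (Seq.fromList [1, 2, 3]))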
Still, although you can be good and write only fully parametric code, most of us are not and use lists all over the place. If you are learning, I strongly suggest you do too.
EDIT: Based on comments I realize I never explained when to use Data.Vector vs Data.Sequence. Arrays and Vectors provide extremely fast indexing and slicing operations, but are fundamentally transient (imperative) data structures. Purely functional data structures like Data.Sequence and [] let you efficiently produce new values from old values as if you had modified the old values.
newList oldList = 7 : drop 5 oldList
doesn't modify the old list, and it doesn't have to copy it. So even if oldList is incredibly long, this "modification" will be very fast. Similarly,
newSequence newValue oldSequence = Sequence.update 3000 newValue oldSequence
will produce a new sequence with newValue in place of its 3000th element. Again, it doesn't destroy the old sequence; it just creates a new one. But it does this very efficiently, taking O(log(min(k, n-k))) where n is the length of the sequence and k is the index you modify.
You can't easily do this with Vectors and Arrays. They can be modified, but that is real imperative modification, and so it can't be done in regular Haskell code. That means operations in the Vector package that make modifications, like snoc and cons, have to copy the entire vector and so take O(n) time. The only exception is that you can use the mutable version (Vector.Mutable) inside the ST monad (or IO) and do all your modifications just like you would in an imperative language. When you are done, you "freeze" your vector to turn it into the immutable structure you want to use with pure code.
My feeling is that you should default to using Data.Sequence if a list is not appropriate. Use Data.Vector only if your usage pattern doesn't involve making many modifications, or if you need extremely high performance within the ST/IO monads.
If all this talk of the ST monad is leaving you confused: all the more reason to stick to pure fast and beautiful Data.Sequence.
I am writing a program that does a lot of table lookups. As such, I was perusing the Haskell documentation when I stumbled upon Data.Map (of course), but also Data.HashMap and Data.Hashtable. I am no expert on hashing algorithms, and after inspecting the packages they all seem really similar. As such I was wondering:
1: what are the major differences, if any?
2: Which would be the most performant with a high volume of lookups on maps/tables of ~4000 key-value pairs?
1: What are the major differences, if any?
Data.Map.Map is a balanced binary tree internally, so its time complexity for lookups is O(log n). I believe it's a "persistent" data structure, meaning it's implemented such that mutative operations yield a new copy with only the relevant parts of the structure updated.
Data.HashMap.Map is a Data.IntMap.IntMap internally, which in turn is implemented as a Patricia tree; its time complexity for lookups is O(min(n, W)), where W is the number of bits in an integer. It is also "persistent". Newer versions (>= 0.2) use hash array mapped tries. According to the documentation: "Many operations have a average-case complexity of O(log n). The implementation uses a large base (i.e. 16) so in practice these operations are constant time."
Data.HashTable.HashTable is an actual hash table, with time complexity O(1) for lookups. However, it is a mutable data structure -- operations are done in-place -- so you're stuck in the IO monad if you want to use it.
2: Which would be the most performant with a high volume of lookups on maps/tables of ~4000 key-value pairs?
The best answer I can give you, unfortunately, is "it depends." If you take the asymptotic complexities literally, you get O(log 4000) = about 12 for Data.Map, O(min(4000, 64)) = 64 for Data.HashMap and O(1) = 1 for Data.HashTable. But it doesn't really work that way... You have to try them in the context of your code.
The obvious difference between Data.Map and Data.HashMap is that the former needs Ord keys, the latter Hashable keys. Most common key types are both, so that's not a deciding criterion. I have no experience whatsoever with Data.HashTable, so I can't comment on that.
The APIs of Data.HashMap and Data.Map are very similar, but Data.Map exports more functions; some, like alter, are absent in Data.HashMap, and others are provided in strict and non-strict variants, while Data.HashMap (I assume you meant the hashmap from unordered-containers) provides lazy and strict APIs in separate modules. If you are using only the common part of the API, switching is really painless.
Concerning performance, Data.HashMap from unordered-containers has pretty fast lookup; last I measured, it was clearly faster than Data.IntMap or Data.Map, and that holds in particular for the (not yet released) HAMT branch of unordered-containers. I think for inserts it was more or less on par with Data.IntMap and somewhat faster than Data.Map, but I'm a bit fuzzy on that.
Both are sufficiently performant for most tasks; for those tasks where they aren't, you'll probably need a tailor-made solution anyway. Since you ask specifically about lookups, I would give Data.HashMap the edge.
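As a minimal usage sketch of Data.HashMap (my own, assuming the unordered-containers API):

import qualified Data.HashMap.Strict as HM

-- Build the map once, then do many lookups; in practice lookups
-- behave like constant time.
table :: HM.HashMap String Int
table = HM.fromList [("one", 1), ("two", 2), ("three", 3)]

lookupCount :: String -> Maybe Int
lookupCount k = HM.lookup k table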
Data.HashTable's documentation now says "use the hashtables package". There's a nice blog post explaining why hashtables is a good package here. It uses the ST monad.
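A small hedged sketch of using the hashtables package inside ST (the module and function names are assumed from that package's API; the example itself is mine):

import Control.Monad.ST (runST)
import qualified Data.HashTable.ST.Basic as H

-- Insert some key/value pairs into a mutable hash table in ST,
-- then perform a single lookup and return the pure result.
lookupInTable :: [(String, Int)] -> String -> Maybe Int
lookupInTable kvs k = runST $ do
  h <- H.new
  mapM_ (uncurry (H.insert h)) kvs
  H.lookup h k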
This question is about the Data.Vector package.
Given that I'll never use the old value of a certain cell once the cell is updated: will the update operation always create a new vector reflecting the update, or will it be done as an in-place update?
Note: I know about Data.Vector.Mutable
No, but something even better can happen.
Data.Vector is built using "stream fusion". This means that if the sequence of operations that you are performing to build up and then tear down the vector can be fused away, then the Vector itself will never even be constructed and your code will turn into an optimized loop.
Fusion works by turning code that would build vectors into code that builds up and tears down streams and then puts the streams into a form that the compiler can see to perform optimizations.
So code that looks like
import Prelude hiding (map, replicate, sum, take, zipWith)
import Data.Vector

foo :: Int
foo = sum as
  where
    as, bs, cs, ds, es :: Vector Int
    as = map (*100) bs
    bs = take 10 cs
    cs = zipWith (+) (generate 1000 id) ds
    ds = cons 1 $ cons 3 $ map (+2) es
    es = replicate 24000 0
despite appearing to build up and tear down quite a few very large vectors can fuse all the way down to an inner loop that only calculates and adds 10 numbers.
Doing what you proposed is tricky, because it requires that you know that no references to a term exist anywhere else, which imposes a cost on any attempt to copy a reference into an environment. Moreover, it interacts rather poorly with laziness. You need to attach little affine addenda to the thunk you conspicuously didn't evaluate yet. But to do this in a multithreaded environment is scarily race prone and hard to get right.
Well, how exactly should the compiler see that "the old vector is not used anywhere"? Say we have a function that changes a vector:
changeIt :: Vector Int -> Int -> Vector Int
changeIt vec n = vec // [(0,n)]
Just from this definition, the compiler cannot assume that vec represents the only reference to the vector in question. We would have to annotate the function so it can only be used in this way - which Haskell doesn't support (but Clean does, as far as I know).
So what can we do in Haskell? Let us say we have another silly function:
changeItTwice vec n = changeIt (changeIt vec n) (n+1)
Now GHC could inline changeIt, and indeed "see" that no reference to the intermediate structure escapes. But typically, you would use this information to not produce that intermediate data structure, instead directly generating the end result!
This is a pretty common optimization (for lists, there is fusion, for example) - and I think it plays pretty much exactly the role you have in mind: Limiting the number of times a data structure needs to be copied. Whether or not this approach is more flexible than in-place-updates is up for debate, but you can definitely recover a lot of performance without having to break abstraction by annotating uniqueness properties.
(However, I think that Vector currently does not, in fact, perform this specific optimization. Might need a few more optimizer rules...)
IMHO this is certainly impossible, as the GHC garbage collector may wreak havoc if you mutate an object in place (even if it is not used anymore). That is because the object may have been moved into an older generation, and mutation could introduce pointers to a younger generation. If the younger generation then gets garbage collected, the object may move, and thus the pointer may become invalid.
AFAIK, all mutable objects in Haskell are located on a special heap that is treated differently by the GC, so that such problems can't occur.
Not necessarily. Data.Vector uses stream fusion, so depending on your use the vector may not be created at all and the program may compile to an efficient constant space loop.
This mostly applies to operations that transform the entire vector rather than just updating a single cell, though.