Is it possible to implement a quicksort in Haskell (with RANDOM-PIVOT) that still has a simple Ord a => [a]->[a] signature?
I'm starting to understand monads, and for now I'm kind of interpreting them as something like a 'command pattern', which works great for IO.
So, I understand that a function that returns a random number should actually return a monadic value like IO, because, otherwise, it would break referential transparency. I also understand that there should be no way to 'extract' the random integer from the returned monadic value, because, otherwise, it would, again, break referential transparency.
But yet, I still think that it should be possible to implement a 'pure' [a]->[a] quicksort function, even if it uses a random pivot, because it IS referentially transparent. From my point of view, the random pivot is just an implementation detail, and shouldn't change the function's signature.
Note: I'm not actually interested in the specific quicksort problem (so, I don't want to sound rude, but I'm not looking for "use mergesort" or "random pivot doesn't increase performance in practice" kinds of answers). What I'm actually interested in is how to implement a 'pure' function that uses 'impure' functions inside it, in cases like quicksort, where I can guarantee that the function as a whole really is pure.
Quicksort is just a good example.
You are making a false assumption: that picking the pivot point is just an implementation detail. Consider a partial ordering on a set, like a quicksort on cards where
card a < card b if the face value is less, but when you evaluate the booleans you get:
4 spades < 4 hearts (false)
4 hearts < 4 spades (false)
4 hearts = 4 spades (false)
In that case the choice of pivots would determine the final ordering of the cards. In precisely the same way, the output of a pseudo-program like
a = get random integer
b = a + 3
print b
is determined by a. If you are randomly choosing something, then your computation is, or could be, non-deterministic.
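To make the card example concrete, here is a hypothetical sketch of such a partial "less than" (the Card and Suit types are made up for illustration):

data Suit = Spades | Hearts | Diamonds | Clubs deriving (Eq, Show)
data Card = Card { face :: Int, suit :: Suit } deriving (Eq, Show)

-- "less than" on face value only: a partial order, not a total one
cardLT :: Card -> Card -> Bool
cardLT a b = face a < face b

-- cardLT (Card 4 Spades) (Card 4 Hearts) == False
-- cardLT (Card 4 Hearts) (Card 4 Spades) == False
-- A quicksort that partitions with cardLT may therefore emit the two fours
-- in either order, depending on which element it happens to pick as pivot.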
OK, check this out.
Selected portions copied from the hashable package, and voodoo-magic language pragmas:
{-# LANGUAGE FlexibleInstances, UndecidableInstances, NoMonomorphismRestriction, OverlappingInstances #-}

import System.Random (mkStdGen, next, split)
import Data.List (foldl')
import Data.Bits (shiftL, xor)

class Hashable a where
  hash :: a -> Int

instance (Integral a) => Hashable a where
  hash = fromIntegral

instance Hashable Char where
  hash = fromEnum

instance (Hashable a) => Hashable [a] where
  hash = foldl' combine 0 . map hash

-- ask the authors of the hashable package about this if interested
combine h1 h2 = (h1 + h1 `shiftL` 5) `xor` h2
OK, so now we can take a list of anything Hashable and turn it into an Int. I've provided Char and Integral a instances here; more and better instances are in the hashable package, which also allows salting and stuff.
This is all just so we can make a number generator.
genFromHashable = mkStdGen . hash
So now the fun part. Let's write a function that takes a random number generator, a comparator function, and a list. Then we'll sort the list by consulting the generator to select a pivot, and the comparator to partition the list.
qSortByGen _ _ [] = []
qSortByGen g f xs = qSortByGen g'' f l ++ mid ++ qSortByGen g''' f r
  where (l, mid, r) = partition (`f` pivot) xs
        pivot = xs !! (pivotLoc `mod` length xs)
        (pivotLoc, g') = next g
        (g'', g''') = split g'

partition f = foldl' step ([],[],[])
  where step (l,mid,r) x = case f x of
          LT -> (x:l,mid,r)
          EQ -> (l,x:mid,r)
          GT -> (l,mid,x:r)
Library functions: next grabs an Int from the generator, and produces a new generator. split forks the generator into two distinct generators.
My functions: partition uses f :: a -> Ordering to partition the list into three lists. If you know folds, it should be quite clear. (Note that it does not preserve the initial ordering of the elements in the sublists; it reverses them. Using a foldr could remedy this were it an issue.) qSortByGen works just like I said before: consult the generator for the pivot, partition the list, fork the generator for use in the two recursive calls, recursively sort the left and right sides, and concatenate it all together.
Convenience functions are easy to compose from here
qSortBy f xs = qSortByGen (genFromHashable xs) f xs
qSort = qSortBy compare
Notice the final function's signature.
ghci> :t qSort
qSort :: (Ord a, Hashable a) => [a] -> [a]
The type inside the list must implement both Hashable and Ord. There's the "pure" function you were asking for, with one logical added requirement. The more general functions are less restrictive in their requirements.
ghci> :t qSortBy
qSortBy :: (Hashable a) => (a -> a -> Ordering) -> [a] -> [a]
ghci> :t qSortByGen
qSortByGen
  :: (System.Random.RandomGen t) =>
     t -> (a -> a -> Ordering) -> [a] -> [a]
Final notes
qSort will behave exactly the same way for all inputs. The "random" pivot selection is, in fact, deterministic. But it is obscured by hashing the list and then seeding a random number generator, making it "random" enough for me. ;)
qSort also only works for lists with length less than maxBound :: Int, which ghci tells me is 9,223,372,036,854,775,807. I thought there would be an issue with negative indexes, but in my ad-hoc testing I haven't run into it yet.
Or, you can just live with the IO monad for "truer" randomness.
qSortIO xs = do g <- getStdGen   -- add getStdGen to your imports
                return $ qSortByGen g compare xs
ghci> :t qSortIO
qSortIO :: (Ord a) => [a] -> IO [a]
ghci> qSortIO "Hello world"
" Hdellloorw"
ghci> qSort "Hello world"
" Hdellloorw"
In such cases, where you know that the function is referentially transparent but you can't prove it to the compiler, you may use the function unsafePerformIO :: IO a -> a from the module System.IO.Unsafe.
For instance, you may use unsafePerformIO to get an initial random state and then do anything using just this state.
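A minimal sketch of that idea (the name initialGen is made up for illustration; it assumes System.Random's newStdGen):

import System.IO.Unsafe (unsafePerformIO)
import System.Random (StdGen, newStdGen)

-- Grab one generator "impurely" and keep reusing it as a plain value.
-- NOINLINE ensures the IO action runs at most once.
initialGen :: StdGen
initialGen = unsafePerformIO newStdGen
{-# NOINLINE initialGen #-}

-- e.g. it could seed a pure random-pivot sort such as the qSortByGen defined earlier:
-- qSortUnsafe xs = qSortByGen initialGen compare xs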
But please note: don't use it if it's not really needed. And even then, think twice about it. unsafePerformIO is somewhat the root of all evil, since its consequences can be dramatic - anything from coercing different types to crashing the RTS is possible with this function.
Haskell provides the ST monad to perform non-referentially-transparent actions with a referentially transparent result.
Note that it doesn't enforce referential transparency; it just ensures that potentially non-referentially-transparent temporary state can't leak out. Nothing prevents you from returning manipulated pure input data that was rearranged in a non-reproducible way. Best is to implement the same thing in both ST and pure ways and use QuickCheck to compare them on random inputs.
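For example, here is a small sketch of that pattern: mutable state inside runST, a plain pure value outside.

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- The mutation is local to the ST computation; callers only ever see
-- the final Int, so the function is referentially transparent.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc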
Related
The foldr identity is
foldr (:) []
More generally, with folds you can either destroy structure and end up with a summary value or inject structure in such a way that you end up with the same output structure.
[Int] -> [Int]
or
[Int] -> Int
or
[Int] -> ?
I'm wondering if there is a similar identity with unfoldr/l.
I know how to get
Int -> [Int]
with unfold/ana.
I'm looking for some kind of way to go from
Int -> Int
with a recursion scheme.
Taking a cue from your remark about factorials, we can note that natural numbers can be treated as a recursive data structure:
data Nat = Zero | Succ Nat
In terms of the recursion-schemes machinery, the corresponding base functor would be:
data NatF a = ZeroF | SuccF a
  deriving (Functor)
NatF, however, is isomorphic to Maybe. That being so, recursion-schemes conveniently makes Maybe the base functor of the Natural type from base. For instance, here is the type of ana specialised to Natural:
ana @Natural :: (a -> Maybe a) -> a -> Natural
We can use it to write the identity unfold for Natural:
{-# LANGUAGE LambdaCase #-}

import Numeric.Natural
import Data.Functor.Foldable

idNatAna :: Natural -> Natural
idNatAna = ana $ \case
    0 -> Nothing
    x -> Just (x - 1)
The coalgebra we just gave to ana is project for Natural, project being the function that unwraps one layer of the recursive structure. In terms of the recursion-schemes vocabulary, ana project is the identity unfold, and cata embed is the identity fold. (In particular, project for lists is uncons from Data.List, except that it is encoded with ListF instead of Maybe.)
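A quick way to see the other half of that claim, reusing the imports above:

-- cata embed rebuilds exactly the structure it tears down, so it is the
-- identity fold on Natural (or any other type with Recursive/Corecursive instances).
idNatCata :: Natural -> Natural
idNatCata = cata embed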
By the way, the factorial function can be expressed as a paramorphism on naturals (as pointed out in the note at the end of this question). We can also implement that in terms of recursion-schemes:
fact :: Natural -> Natural
fact = para $ \case
    Nothing -> 1
    Just (predec, prod) -> prod * (predec + 1)
para makes available, at each recursive step, the rest of the structure to be folded (if we were folding a list, that would be its tail). In this case, I have called the value thus provided predec because at the n-th recursive step from bottom to top predec is n - 1.
Note that user11228628's hylomorphism is probably a more efficient implementation, if you happen to care about that. (I haven't benchmarked them, though.)
The kind of recursion scheme that deals with building up an intermediate structure and tearing it down, so that the structure doesn't appear in the input or output, is a hylomorphism, spelled hylo in recursion-schemes.
To use a hylomorphism, you need to specify an algebra (something that consumes one step of a recursive structure) and a coalgebra (something that produces one step of a recursive structure), and you need to have a data type for the kind of structure you're using, of course.
You suggested factorial, so let's look into how to write that as a hylomorphism.
One way to look at factorial is as the product of a list of numbers counting down from the initial n. In this framing, we can think of the product as our algebra, tearing down the list one cons at a time, and the count-down as our coalgebra, building up the list as n is decremented.
recursion-schemes gives us ListF as a handy base functor for lists, so we'll use that as the data type produced by the coalgebra and consumed by the algebra. Its constructors are Nil and Cons, which of course resemble the constructors for full lists, except that a ListF, like any base structure in a recursion scheme, uses a type parameter in the place that lists would use actual recursion (meaning that Cons :: a -> b -> ListF a b instead of (:) :: a -> [a] -> [a]).
So that determines our types. Now defining fact is a rather fill-in-the-blanks exercise:
import Prelude hiding (product)
import Data.Functor.Foldable
product :: ListF Int Int -> Int
product Nil = 1
product (Cons a b) = a * b
countDown :: Int -> ListF Int Int
countDown 0 = Nil
countDown n = Cons n (n - 1)
fact :: Int -> Int
fact = hylo product countDown
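For instance, loading the above in ghci:
ghci> fact 5
120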
Suppose we want to write our own sum function in Haskell:
sum' :: (Num a) => [a] -> a
sum' [] = 0
sum' (x:xs) = x + sum' xs
Why can't we do something like:
sum' :: (Num a) => [a] -> a
sum' [] = 0
sum' (xs++[x]) = x + sum' xs
In other words, why can't we use ++ in pattern matching?
This is a deserving question, and it has so far received sensible answers (mutter only constructors allowed, mutter injectivity, mutter ambiguity), but there's still time to change all that.
We can say what the rules are, but most of the explanations for why the rules are what they are start by over-generalising the question, addressing why we can't pattern match against any old function (mutter Prolog). This is to ignore the fact that ++ isn't any old function: it's a (spatially) linear plugging-stuff-together function, induced by the zipper-structure of lists. Pattern matching is about taking stuff apart, and indeed, notating the process in terms of the plugger-togetherers and pattern variables standing for the components. Its motivation is clarity. So I'd like
lookup :: Eq k => k -> [(k, v)] -> Maybe v
lookup k (_ ++ [(k, v)] ++ _) = Just v
lookup _ _ = Nothing
and not only because it would remind me of the fun I had thirty years ago when I implemented a functional language whose pattern matching offered exactly that.
The objection that it's ambiguous is a legitimate one, but not a dealbreaker. Plugger-togetherers like ++ offer only finitely many decompositions of finite input (and if you're working on infinite data, that's your own lookout), so what's involved is at worst search, rather than magic (inventing arbitrary inputs that arbitrary functions might have thrown away). Search calls for some means of prioritisation, but so do our ordered matching rules. Search can also result in failure, but so, again, can matching.
We have a sensible way to manage computations offering alternatives (failure and choice) via the Alternative abstraction, but we are not used to thinking of pattern matching as a form of such computation, which is why we exploit Alternative structure only in the expression language. The noble, if quixotic, exception is match-failure in do-notation, which calls the relevant fail rather than necessarily crashing out. Pattern matching is an attempt to compute an environment suitable for the evaluation of a 'right-hand side' expression; failure to compute such an environment is already handled, so why not choice?
(Edit: I should, of course, add that you only really need search if you have more than one stretchy thing in a pattern, so the proposed xs++[x] pattern shouldn't trigger any choices. Of course, it takes time to find the end of a list.)
Imagine there was some sort of funny bracket for writing Alternative computations, e.g., with (|) meaning empty, (|a1|a2|) meaning (|a1|) <|> (|a2|), and a regular old (|f s1 .. sn|) meaning pure f <*> s1 .. <*> sn. One might very well also imagine (|case a of {p1 -> a1; .. pn->an}|) performing a sensible translation of search-patterns (e.g. involving ++) in terms of Alternative combinators. We could write
lookup :: (Eq k, Alternative a) => k -> [(k, v)] -> a k
lookup k xs = (|case xs of _ ++ [(k, v)] ++ _ -> pure v|)
We may obtain a reasonable language of search-patterns for any datatype generated by fixpoints of differentiable functors: symbolic differentiation is exactly what turns tuples of structures into choices of possible substructures. Good old ++ is just the sublists-of-lists example (which is confusing, because a list-with-a-hole-for-a-sublist looks a lot like a list, but the same is not true for other datatypes).
Hilariously, with a spot of LinearTypes, we might even keep hold of holey data by their holes as well as their root, then plug away destructively in constant time. It's scandalous behaviour only if you don't notice you're doing it.
You can only pattern match on constructors, not on general functions.
Mathematically, a constructor is an injective function: each combination of arguments gives one unique value, in this case a list. Because that value is unique, the language can deconstruct it again into the original arguments. I.e., when you pattern match on :, you essentially use the function
uncons :: [a] -> Maybe (a, [a])
which checks if the list is of a form you could have constructed with : (i.e., if it is non-empty), and if yes, gives you back the head and tail.
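A definition with exactly that shape, for reference:

uncons :: [a] -> Maybe (a, [a])
uncons []     = Nothing
uncons (x:xs) = Just (x, xs)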
++ is not injective though, for example
Prelude> [0,1] ++ [2]
[0,1,2]
Prelude> [0] ++ [1,2]
[0,1,2]
Neither of these representations is the right one, so how should the list be deconstructed again?
What you can do, however, is define a new, “virtual” constructor that acts like : in that it always separates exactly one element from the rest of the list (if possible), but does so on the right:
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

pattern (:>) :: [a] -> a -> [a]
pattern (xs:>ω) <- (unsnoc -> Just (xs,ω))
  where xs:>ω = xs ++ [ω]

unsnoc :: [a] -> Maybe ([a], a)
unsnoc [] = Nothing
unsnoc [x] = Just ([], x)
unsnoc (x:xs) = (\(ys, y) -> (x:ys, y)) <$> unsnoc xs
Then
sum' :: Num a => [a] -> a
sum' (xs:>x) = x + sum xs
sum' [] = 0
Note that this is very inefficient though, because the :> pattern-synonym actually needs to dig through the entire list, so sum' has quadratic rather than linear complexity.
A container that allows pattern matching on both the left and right end efficiently is Data.Sequence, with its :<| and :|> pattern synonyms.
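For instance, the same sum over a Seq, where matching on the right end is O(1) rather than O(n) (a small sketch):

{-# LANGUAGE PatternSynonyms #-}
import Data.Sequence (Seq, pattern Empty, pattern (:|>))

sumSeq :: Num a => Seq a -> a
sumSeq Empty      = 0
sumSeq (xs :|> x) = x + sumSeq xs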
You can only pattern-match on data constructors, and ++ is a function, not a data constructor.
Data constructors are persistent; a value like 'c':[] cannot be simplified further, because it is a fundamental value of type [Char]. An expression like "c" ++ "d", however, can be replaced with its equivalent "cd" at any time, and thus couldn't reliably be counted on to be present for pattern matching.
(You might argue that "cd" could always be replaced by "c" ++ "d", but in general there isn't a one-to-one mapping between a list and a decomposition via ++. Is "cde" equivalent to "c" ++ "de" or "cd" ++ "e" for pattern matching purposes?)
++ isn't a constructor, it's just a plain function. You can only match on constructors.
You can use ViewPatterns or PatternSynonyms to augment your ability to pattern match (thanks @luqui).
Context
Most Haskell tutorials I know (e.g. LYAH) introduce newtypes as a cost-free idiom that allows enforcing more type safety. For instance, this code will type-check:
type Speed = Double
type Length = Double
computeTime :: Speed -> Length -> Double
computeTime v l = l / v
but this won't:
newtype Speed = Speed { getSpeed :: Double }
newtype Length = Length { getLength :: Double }
-- wrong!
computeTime :: Speed -> Length -> Double
computeTime v l = l / v
and this will:
-- right
computeTime :: Speed -> Length -> Double
computeTime (Speed v) (Length l) = l / v
In this particular example, the compiler knows that Speed is just a Double, so the pattern-matching is moot and will not generate any executable code.
Question
Are newtypes still cost-free when they appear as arguments of parametric types? For instance, consider a list of newtypes:
computeTimes :: [Speed] -> Length -> [Double]
computeTimes vs l = map (\v -> getLength l / getSpeed v) vs
I could also pattern-match on speed in the lambda:
computeTimes' :: [Speed] -> Length -> [Double]
computeTimes' vs l = map (\(Speed v) -> getLength l / v) vs
In either case, for some reason, I feel that real work is getting done! I start to feel even more uncomfortable when the newtype is buried within a deep tree of nested parametric datatypes, e.g. Map Speed [Set Speed]; in this situation, it may be difficult or impossible to pattern-match on the newtype, and one would have to resort to accessors like getSpeed.
TL;DR
Will the use of a newtype never ever incur a cost, even when the newtype appears as a (possibly deeply-buried) argument of another parametric type?
On their own, newtypes are cost-free. Applying their constructor, or pattern matching on them, has zero cost.
When a newtype is used as a parameter of another type, e.g. [T], the representation of [T] is precisely the same as that of [T'] if T is a newtype for T'. So, there's no loss in performance.
However, there are two main caveats I can see.
newtypes and instances
First, newtype is frequently used to introduce new instances of type classes. Clearly, when these are user-defined, there's no guarantee that they have the same cost as the original instances. E.g., when using
newtype Op a = Op a

instance Ord a => Ord (Op a) where
  compare (Op x) (Op y) = compare y x
comparing two Op Int will cost slightly more than comparing Int, since the arguments need to be swapped. (I am neglecting optimizations here, which might make this cost free when they trigger.)
newtypes used as type arguments
The second point is more subtle. Consider the following two implementations of the identity [Int] -> [Int]
id1, id2 :: [Int] -> [Int]
id1 xs = xs
id2 xs = map (\x->x) xs
The first one has constant cost. The second has a linear cost (assuming no optimization triggers). A smart programmer should prefer the first implementation, which is also simpler to write.
Suppose now we introduce newtypes on the argument type, only:
id1, id2 :: [Op Int] -> [Int]
id1 xs = xs -- error!
id2 xs = map (\(Op x)->x) xs
We can no longer use the constant cost implementation because of a type error. The linear cost implementation still works, and is the only option.
Now, this is quite bad. The input representation for [Op Int] is exactly, bit by bit, the same as for [Int]. Yet, the type system forbids us from performing the identity in an efficient way!
To overcome this issue, safe coercions were introduced in Haskell.
id3 :: [Op Int] -> [Int]
id3 = coerce
The magic coerce function, under certain hypotheses, removes or inserts newtypes as needed to make the types match, even inside other types, as for [Op Int] above. Further, it is a zero-cost function.
Note that coerce works only under certain conditions (the compiler checks for them). One of these is that the newtype constructor must be visible: if a module does not export Op :: a -> Op a you can not coerce Op Int to Int or vice versa. Indeed, if a module exports the type but not the constructor, it would be wrong to make the constructor accessible anyway through coerce. This makes the "smart constructors" idiom still safe: modules can still enforce complex invariants through opaque types.
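Tying this back to the question's types, a small sketch: the list type constructor is representational in its element, so the whole [Speed] argument can be reinterpreted in one zero-cost step instead of mapping getSpeed over it.

import Data.Coerce (coerce)

newtype Speed  = Speed  { getSpeed  :: Double }
newtype Length = Length { getLength :: Double }

computeTimes :: [Speed] -> Length -> [Double]
computeTimes vs l = map (getLength l /) (coerce vs :: [Double])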
It doesn't matter how deeply buried a newtype is in a stack of (fully) parametric types. At runtime, the values v :: Speed and w :: Double are completely indistinguishable – the wrapper is erased by the compiler, so even v is really just a pointer to a single 64-bit floating-point number in memory. Whether that pointer is stored in a list or tree or whatever doesn't make a difference either. getSpeed is a no-op and will not appear at runtime in any way at all.
So what do I mean by “fully parametric”? The thing is, newtypes can obviously make a difference at compile time, via the type system. In particular, they can guide instance resolution, so a newtype that invokes a different class method may certainly have worse (or, just as easily, better!) performance than the wrapped type. For example,
class Integral n => Fibonacci n where
  fib :: n -> Integer

instance Fibonacci Int where
  fib = (fibs !!)
   where fibs = [ if i<2 then 1
                   else fib (i-2) + fib (i-1)
                | i<-[0::Int ..] ]
this implementation is pretty slow, because it uses a lazy list (and performs lookups in it over and over again) for memoisation. On the other hand,
import qualified Data.Vector as Arr

-- | A number between 0 and 753
newtype SmallInt = SmallInt { getSmallInt :: Int }

instance Fibonacci SmallInt where
  fib = (fibs Arr.!) . getSmallInt
   where fibs = Arr.generate 754 $
           \i -> if i<2 then 1
                  else fib (SmallInt $ i-2) + fib (SmallInt $ i-1)
This fib is much faster, because thanks to the input being limited to a small range, it is feasible to strictly allocate all of the results and store them in a fast O(1) lookup array, not needing the spine-laziness.
This of course applies again regardless of what structure you store the numbers in. But the different performance only comes about because different method instantiations are called – at runtime this means simply, completely different functions.
Now, a fully parametric type constructor must be able to store values of any type. In particular, it cannot impose any class restrictions on the contained data, and hence also not call any class methods. Therefore this kind of performance difference can not happen if you're just dealing with generic [a] lists or Map Int a maps. It can, however, occur when you're dealing with GADTs. In this case, even the actual memory layout might be completely different, for instance with
{-# LANGUAGE GADTs #-}

import qualified Data.Vector as Arr
import qualified Data.Vector.Unboxed as UArr

data Array a where
  BoxedArray :: Arr.Vector a -> Array a
  UnboxArray :: UArr.Unbox a => UArr.Vector a -> Array a
might allow you to store Double values more efficiently than Speed values, because the former can be stored in a cache-optimised unboxed array. This is only possible because the UnboxArray constructor is not fully parametric.
The term general (as opposed to specialized) in the question means the function can sort the items as long as they are of a type that is an instance of Ord.
Consider one of the most famous Haskell ads:
quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
  where
    lesser = filter (< p) xs
    greater = filter (>= p) xs
The above implementation is not in-place.
I was trying to write an in-place version.
It's easy to make quicksort in-place. Usually, we just need a mutable array and I chose Foreign.Marshal.Array.
My implementation is in-place and runs very well, but I am not satisfied with its type signature
(Ord a, Storable a) => [a] -> IO [a]
To be more precise, the type constraint Storable a annoys me.
Obviously, if we want to sort items, the Ord constraint is needed, while Storable is unnecessary.
In contrast, the type signature of the classic quicksort, or of sort in Data.List, is Ord a => [a] -> [a]; the constraint is just Ord.
I didn't find a way to get rid of the additional constraint.
I searched Stackoverflow, and found some questions about in-place quicksort in haskell, e.g.
How do you do an in-place quicksort in Haskell
Why is the minimalist, example Haskell quicksort not a "true" quicksort?
Unfortunately, their major concern is just in-place. All of the in-place quicksort examples given there have additional type constraints as well.
For example, iqsort given by klapaucius has the type signature
iqsort :: (Vector v a, Ord a) => v a -> v a
Does anyone know how to implement an in-place quicksort haskell function with type signature Ord a => [a] -> [a]?
I know how to make an in-place quicksort, but I don't know how to make it general.
iqsort actually looks fully general to me. If you look at the Data.Vector.Generic haddocks, you can in fact use that interface for any a! The difference is that the function as given is more generic, because it allows you to choose an unboxed vector, which of course only works over some a.
Here's the link: http://hackage.haskell.org/packages/archive/vector/0.10.0.1/doc/html/Data-Vector-Generic.html
So if you pick your v to be the boxed vector type, the Vector constraint goes away.
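A sketch of that specialisation (assuming the iqsort from the linked answer is in scope):

import qualified Data.Vector as V   -- boxed vectors, no constraint on the element type

-- v is fixed to the boxed V.Vector, so only Ord a remains:
sortViaVector :: Ord a => [a] -> [a]
sortViaVector = V.toList . iqsort . V.fromList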
Yes it is possible. (Although in Haskell you want to use this kind of imperative algorithms only in cases where you really need top performance.)
I know of 2 such algorithms:
sort from vector-algorithms.
qsort (or introsort) from marray-sort, which I haven't released to Hackage yet. (Let me know if you need it.) It works on mutable arrays.
(Introsort is basically refined quicksort that has O(n log n) worst case complexity.)
I'm not sure about MVector, but for MArrays, you don't have to worry about the additional constraints MArray a e m. They're there to make the type more general, not less. Signatures like
qsort :: (MArray a e m, Ord e) => a Int e -> m ()
allow you to use the same algorithm for different array representations. For some data types, you can have specialized arrays of that type which are faster and more compact than generic arrays. For example, if you want to sort 8-bit integers, there is a specialized instance MArray IOUArray Int8 IO for unboxed arrays. And a specialization of qsort to this kind of array, obtained just through polymorphism, is
qsort :: IOUArray Int Int8 -> IO ()
But you also have the instance MArray IOArray e IO that works for arbitrary e. By using qsort with IOArray, you get a specialization without constraints on e:
qsort :: (Ord e) => IOArray Int e -> IO ()
Furthermore, if you use STArrays and the ST monad, you can sort an array in-place using the same function, and get the result later as a pure value, without IO.
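Here is a small, self-contained sketch of that last point, with a simple in-place selection sort standing in for the real qsort; the interesting part is that the wrapper gets exactly the Ord a => [a] -> [a] signature, because the boxed STArray puts no extra constraint on the elements.

import Control.Monad (forM_, when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newListArray, getElems, readArray, writeArray)

sortInPlace :: Ord a => [a] -> [a]
sortInPlace xs = runST $ do
    arr <- newSTArray (0, n - 1) xs
    -- in-place selection sort on the mutable array (stand-in for qsort)
    forM_ [0 .. n - 2] $ \i ->
      forM_ [i + 1 .. n - 1] $ \j -> do
        a <- readArray arr i
        b <- readArray arr j
        when (b < a) $ do
          writeArray arr i b
          writeArray arr j a
    getElems arr
  where
    n = length xs
    -- pin the array type to a boxed STArray
    newSTArray :: (Int, Int) -> [e] -> ST s (STArray s Int e)
    newSTArray = newListArray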
I've been looking at the source for Data.MemoCombinators but I can't really see where the heart of it is.
Please explain to me what the logic is behind all of these combinators and the mechanics of how they actually work to speed up your program in real world programming.
I'm looking for specifics for this implementation, and optionally comparison/contrast with other Haskell approaches to memoization. I understand what memoization is and am not looking for a description of how it works in general.
This library is a straightforward combinatorization of the well-known technique of memoization. Let's start with the canonical example:
fib = (map fib' [0..] !!)
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n-1) + fib (n-2)
I interpret what you said to mean that you know how and why this works. So I'll focus on the combinatorization.
We are essentially trying to capture and generalize the idea of (map f [0..] !!). The type of this function is (Int -> r) -> (Int -> r), which makes sense: it takes a function from Int -> r and returns a memoized version of the same function. Any function which is semantically the identity and has this type is called a "memoizer for Int" (even id, which doesn't memoize). We generalize to this abstraction:
type Memo a = forall r. (a -> r) -> (a -> r)
So a Memo a, a memoizer for a, takes a function from a to anything, and returns a semantically identical function that has been memoized (or not).
The idea of the different memoizers is to find a way to enumerate the domain with a data structure, map the function over them, and then index the data structure. bool is a good example:
bool :: Memo Bool
bool f = table (f True, f False)
  where
    table (t,f) True = t
    table (t,f) False = f
Functions from Bool are equivalent to pairs, except a pair will only evaluate each component once (as is the case for every value that occurs outside a lambda). So we just map to a pair and back. The essential point is that we are lifting the evaluation of the function above the lambda for the argument (here the last argument of table) by enumerating the domain.
Memoizing Maybe a is a similar story, except now we need to know how to memoize a for the Just case. So the memoizer for Maybe takes a memoizer for a as an argument:
maybe :: Memo a -> Memo (Maybe a)
maybe ma f = table (f Nothing, ma (f . Just))
  where
    table (n,j) Nothing = n
    table (n,j) (Just x) = j x
The rest of the library is just variations on this theme.
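For instance, a product type can be memoized by threading one memoizer through the other, roughly like this sketch (essentially the library's pair combinator):

pair :: Memo a -> Memo b -> Memo (a, b)
pair ma mb f = uncurry (ma (\a -> mb (\b -> f (a, b))))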
The way it memoizes integral types uses a more appropriate structure than [0..]. It's a bit involved, but basically just creates an infinite tree (representing the numbers in binary to elucidate the structure):
1
  10
    100
      1000
      1001
    101
      1010
      1011
  11
    110
      1100
      1101
    111
      1110
      1111
So that looking up a number in the tree has running time proportional to the number of bits in its representation.
As sclv points out, Conal's MemoTrie library uses the same underlying technique, but uses a typeclass presentation instead of a combinator presentation. We released our libraries independently at the same time (indeed, within a couple hours!). Conal's is easier to use in simple cases (there is only one function, memo, and it will determine the memo structure to use based on the type), whereas mine is more flexible, as you can do things like this:
boundedMemo :: Integer -> Memo Integer
boundedMemo bound f = \z -> if z < bound then memof z else f z
  where
    memof = integral f
Which only memoizes values less than a given bound, needed for the implementation of one of the Project Euler problems.
There are other approaches, for example exposing an open fixpoint function over a monad:
memo :: MonadState ... m => ((Integer -> m r) -> (Integer -> m r)) -> m (Integer -> m r)
Which allows yet more flexibility, e.g. purging caches, LRU, etc. But it is a pain in the ass to use, and also it puts strictness constraints on the function to be memoized (e.g. no infinite left recursion). I don't believe there are any libraries that implement this technique.
Did that answer what you were curious about? If not, perhaps make explicit the points you are confused about?
The heart is the bits function:
-- | Memoize an ordered type with a bits instance.
bits :: (Ord a, Bits a) => Memo a
bits f = IntTrie.apply (fmap f IntTrie.identity)
It is the only function (except the trivial unit :: Memo ()) which can give you a Memo a value. It uses the same idea as in this page about Haskell memoization. Section 2 shows the simplest memoization strategy using a list and section 3 does the same using a binary tree of naturals, similar to the IntTrie used in memocombinators.
The basic idea is to use a construction like (map fib [0 ..] !!) or, in the memocombinators case, IntTrie.apply (fmap f IntTrie.identity). The thing to notice here is the correspondence between IntTrie.apply and !! and also between IntTrie.identity and [0..].
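For comparison, the list-based analogue of bits (only valid for non-negative arguments, of course) is just:

memoListInt :: (Int -> r) -> (Int -> r)
memoListInt f = (map f [0..] !!)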
The next step is memoizing functions with other types of arguments. This is done with the wrap function which uses an isomorphism between types a and b to construct a Memo b from a Memo a. For example:
Memo.integral f
=>
wrap fromInteger toInteger bits f
=>
bits (f . fromInteger) . toInteger
=>
IntTrie.apply (fmap (f . fromInteger) IntTrie.identity) . toInteger
~> (semantically equivalent)
(map (f . fromInteger) [0..] !!) . toInteger
The rest of the source code deals with types like List, Maybe, Either and memoizing multiple arguments.
Some of the work is done by IntTrie: http://hackage.haskell.org/package/data-inttrie-0.0.4
Luke's library is a variation of Conal's MemoTrie library, which he described here: http://conal.net/blog/posts/elegant-memoization-with-functional-memo-tries/
Some further expansion -- the general notion behind functional memoization is to take a function from a -> b and map it across a datastructure indexed by all possible values of a and containing values of b. Such a datastructure should be lazy in two ways -- first it should be lazy in the values it holds. Second, it should be lazily produced itself. The former is by default in a nonstrict language. The latter is accomplished by using generalized tries.
The various approaches of memocombinators, memotrie, etc are all just ways of creating compositions of pieces of tries over individual types of datastructures to allow for the simple construction of tries for increasingly complex structures.
@luqui One thing that is not clear to me: does this have the same operational behaviour as the following:
fib :: [Int]
fib = map fib' [0..]
  where fib' 0 = 0
        fib' 1 = 1
        fib' n = fib!!(n-1) + fib!!(n-2)
The above should memoize fib at the top level, and hence if you then define a function:
f n = fib!!n + fib!!(n+1)
If we then compute f 5, we obtain that fib 5 is not recomputed when computing fib 6. It is not clear to me whether the memoization combinators have the same behaviour (i.e. top-level memoization instead of only prohibiting the recomputation "inside" the fib computation), and if so, why exactly?