Performance of predicate on the length of a list - Haskell

I have read (and also reasoned) that computing the length of a list is bad for performance in Haskell. However, I have long lists in my program, and all I need to know is whether a list's length is greater than or less than some number X.
Is there already something built into Haskell for this kind of predicate, or do I have to resort to manual looping?

On vanilla lists, you can check this using drop:
-- cmpLen n xs == compare (length xs) n, but forces at most
-- n+1 cells of xs, so it is safe even on very long or infinite lists
cmpLen :: Int -> [a] -> Ordering
cmpLen n xs
  | n < 0     = GT                 -- a length is never negative
  | n == 0    = if null xs then EQ else GT
  | otherwise = case drop (n - 1) xs of
      []  -> LT                    -- fewer than n elements
      [_] -> EQ                    -- exactly n elements
      _   -> GT                    -- more than n elements
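For example, with the definition above:
> cmpLen 3 [1..]   -- fine on an infinite list
GT
> cmpLen 5 "abc"
LT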
However, this still takes time proportional to the value of n (not to its size, as would be typical in asymptotic analysis!). If you intend to do this often, you can take a cue from Okasaki and build a new structure that caches the operation you want to be efficient. I have wanted this a few times before, and found the following sort of interface convenient in those cases:
import Prelude hiding (length)
import Data.Monoid (Sum (..))

type LenList a = (Sum Word, [a])

singleton :: a -> LenList a
singleton x = (1, [x])
cons x = (singleton x <>)
length = getSum . fst
elems = snd
Note that, since LenList a is already a Monoid, you get some of the usual operations for free, e.g. there is an empty LenList a named mempty, and concatenation is given by (<>). Some operations (notably the ones that produce infinite lists) will not be implementable for this type. However, you pay an O(1) price on each construction operation to make asking for the length of one of these O(1), which can be a nice tradeoff in many situations.
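For example, assuming the definitions above, a quick GHCi session might look like:
> xs = cons 'a' (cons 'b' mempty)
> length xs
2
> elems xs
"ab"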

Haskell - why would I use infinite data structures?

In Haskell, it is possible to define infinite lists like so:
[1.. ]
I found many articles which describe how to implement infinite lists, and I understand how this works.
However, I can't think of any reason to use the concept of infinite data structures.
Can someone give me an example of a problem, which can be solved easier (or maybe only) with an infinite list in Haskell?
The basic advantage of lists in Haskell is that they’re a control structure that looks like a data structure. You can write code that operates incrementally on streams of data, but it looks like simple operations on lists. This is in contrast to other languages that require the use of an explicitly incremental structure, like iterators (Python’s itertools), coroutines (C# IEnumerable), or ranges (D).
For example, a sort function can be written in such a way that it sorts as few elements as possible before it starts to produce results. While sorting the entire list takes O(n log n) / linearithmic time in the length of the list, minimum xs = head (sort xs) only takes O(n) / linear time, because head will only examine the first constructor of the list, like x : _, and leave the tail as an unevaluated thunk that represents the remainder of the sorting operation.
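To see the mechanism, here is a sketch using insertion sort (GHC's actual sort is a merge sort, but the same reasoning applies): forcing only the head of the result performs one comparison per layer of inserts, so head . isort is O(n).
import Data.List (insert)

-- isort xs is a chain of inserts; forcing only the head forces
-- each insert just enough to expose its first element.
isort :: Ord a => [a] -> [a]
isort = foldr insert []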
This means that performance is compositional: for example, if you have a long chain of operations on a stream of data, like sum . map (* 2) . filter (< 5), it looks like it would first filter all the elements, then map a function over them, then take the sum, producing a full intermediate list at every step. But what happens is that each element is only processed one at a time: given [1, 2, 6], this basically proceeds as follows, with all the steps happening incrementally:
total = 0
1 < 5 is true
1 * 2 == 2
total = 0 + 2 = 2
2 < 5 is true
2 * 2 == 4
total = 2 + 4 = 6
6 < 5 is false
result = 6
This is exactly how you would write a fast loop in an imperative language (pseudocode):
total = 0;
for x in xs {
if (x < 5) {
total = total + x * 2;
}
}
Again, performance is compositional: because of laziness, this code has constant memory usage while processing the list. And there is nothing special inside map or filter that makes this happen: they can be entirely independent.
For another example, `and` in the standard library computes the logical AND of a list, e.g. and [a, b, c] == a && b && c, and it's implemented simply as a fold: and = foldr (&&) True. The moment it reaches a False element in the input, it stops evaluating, simply because && is lazy in its right argument. Laziness gives you composition!
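You can even see it short-circuit on an infinite list:
> and (map (< 10) [1..])
False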
For a great paper on all this, read the famous Why Functional Programming Matters by John Hughes, which goes over the advantages of lazy functional programming (in Miranda, a forebear of Haskell) far better than I could.
Annotating a list with its indices temporarily uses an infinite list of indices:
zip [0..] ['a','b','c','d'] = [(0,'a'), (1,'b'), (2,'c'), (3,'d')]
Memoizing functions while maintaining purity (in this case this transformation causes an exponential speed increase, because the memo table is used recursively):
fib = (memo !!)
  where
    memo = map fib' [0..]  -- cache of *all* Fibonacci numbers (evaluated on demand)
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n-1) + fib (n-2)
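Thanks to the shared memo list, even large arguments come back immediately:
> fib 100
354224848179261915075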
Purely mocking programs with side-effects (free monads)
data IO a = Return a
          | GetChar (Char -> IO a)
          | PutChar Char (IO a)
Potentially non-terminating programs are represented with infinite IO structures; e.g. forever (putChar 'y') = PutChar 'y' (PutChar 'y' (PutChar 'y' ...))
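To make "purely mocking" concrete, here is a minimal sketch of an interpreter for this mock IO (run is an illustrative name; note that the data type above shadows the real IO, so a real module would hide the Prelude's):
-- Assumes the `data IO` above, with `import Prelude hiding (IO)`.
-- Runs the mock program against a fixed input string, returning its
-- result (if the input sufficed) together with everything it printed.
run :: IO a -> String -> (Maybe a, String)
run (Return x)    _      = (Just x, "")
run (GetChar k)   (c:cs) = run (k c) cs
run (GetChar _)   []     = (Nothing, "")   -- ran out of input
run (PutChar c m) input  = let (r, out) = run m input in (r, c : out)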
Tries: if you define a type approximately like the following:
data Trie a = Trie a (Trie a) (Trie a)
it can represent an infinite collection of values of type a, indexed by the naturals. Note that there is no base case for the recursion, so every Trie is infinite. But the element at index n can be accessed in log(n) time, which means you can do something like this (using some functions in the inttrie library):
findIndices :: [Integer] -> Trie [Integer]
findIndices = foldr (\(i,x) -> modify x (i:)) (pure []) . zip [0..]
this builds an efficient "reverse lookup table" which given any value in the list can tell you at which indices it occurs, and it both caches results and streams information as soon as it's available:
-- N.B. findIndices [0, 0,1, 0,1,2, 0,1,2,3, 0,1,2,3,4...]
> table = findIndices (concat [ [0..n] | n <- [0..] ])
> table `apply` 0
[0,1,3,6,10,15,21,28,36,45,55,66,78,91,...
all from a one-line infinite fold.
I'm sure there are more examples, there are so many cool things you can do.

Lazy Catalan Numbers in Haskell

How might I go about efficiently generating an infinite list of Catalan numbers? What I have now works reasonably quickly, but it seems to me that there should be a better way.
c 1 = [1]
c n = sum (zipWith (*) xs (reverse xs)) : xs
  where xs = c (n-1)
catalan = map (head . c) [1..]
I made an attempt at using fix instead, but the lambda isn't lazy enough for the computation to terminate:
catalan = fix (\xs -> xs ++ [sum (zipWith (*) xs (reverse xs))])
I realize (++) isn't ideal here.
Does such a better way exist? Can that function be made sufficiently lazy? There's an explicit formula for the nth Catalan number, I know, but I'd rather avoid it.
The Catalan numbers can be defined inductively with:
C(0) = 1 and C(n+1) = ((4n + 2) × C(n)) / (n + 2).
So we can implement this as:
catalan :: Integral i => [i]
catalan = xs
  where
    xs = 1 : zipWith f [0..] xs
    f n cn = div ((4*n+2) * cn) (n+2)
For example:
Prelude> take 10 catalan
[1,1,2,5,14,42,132,429,1430,4862]
I'm guessing you're looking for a lazy, infinite, self-referential list of all the Catalan numbers using one of the basic recurrence relations. That's a common thing to do with the Fibonacci numbers after all. But it would help to specify the recurrence relation you mean, if you want answers to your specific question. I'm guessing this is the one you mean:
cat :: Integer -> Integer
cat 1 = 1
cat n = sum [ cat i * cat (n - i) | i <- [1 .. n - 1] ]
If so, the conversion to a self-referential form looks like this:
import Data.List (inits)
cats :: [Integer]
cats = 1 : [ sum (zipWith (*) pre (reverse pre)) | pre <- tail (inits cats) ]
This is quite a lot more complex than the Fibonacci examples, because the recurrence refers to all previous entries in the list, not just a fixed small number of the most recent. Using inits from Data.List is the easiest way to get the prefix at each position. I used tail there because its first result is the empty list, and that's not helpful here. The rest is a straightforward rewrite of the recurrence relation that I don't have much to say about. Except...
It's going to perform pretty badly. I mean, it's better than the exponential recursive calls of my cat function, but there's a lot of list manipulation going on that's allocating and then throwing away a lot of memory cells. That recurrence relation is not a very good fit for the recursive structure of the list data type. You can explore a lot of ways to make it more efficient, but they'll all be pretty bad in the end. For this particular case, going to a closed-form solution is the way to go if you want performance.
Apparently, what you wanted is
> import Data.List (unfoldr)
> cats = 1 : unfoldr (\fx -> let x = sum $ zipWith (*) fx cats in Just (x, x:fx)) [1]
> take 10 cats
[1,1,2,5,14,42,132,429,1430,4862]
This avoids the repeated reversing of the prefixes (as in the linked answer), by unfolding with the state being a reversed prefix while consing onto the state as well as producing the next element.
We don't have to maintain the non-reversed prefix at all: zipping the reversed prefix against the cats list itself takes care of keeping the prefix to the right length.
You did say you wanted to avoid the direct formula.
The Catalan numbers are best understood by their generating function, which satisfies the relation
f(t) = 1 + t f(t)^2
This can be expressed in Haskell as
f :: [Int]
f = 1 : convolve f f
for a suitable definition of convolve. It is helpful to factor out convolve, for many other counting problems take this form. For example, a generalized Catalan number enumerates ternary trees, and its generating function satisfies the relation
g(t) = 1 + t g(t)^3
which can be expressed in Haskell as
g :: [Int]
g = 1 : convolve g (convolve g g)
convolve can be written using Haskell primitives as
convolve :: [Int] -> [Int] -> [Int]
convolve xs = map (sum . zipWith (*) xs) . tail . scanl (flip (:)) []
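Loading these definitions into GHCi, both sequences come out as expected; the second is the ternary-tree count, OEIS A001764:
> take 10 f
[1,1,2,5,14,42,132,429,1430,4862]
> take 10 g
[1,1,3,12,55,273,1428,7752,43263,246675]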
For these two examples and many other special cases, there are formulas that are quicker to evaluate. convolve is however more general, and cognitively more efficient. In a typical scenario, one has understood a counting problem in terms of a polynomial relation on its generating function, and now wants to compute some numbers in order to look them up in The On-Line Encyclopedia of Integer Sequences. One wants to get in and out, indifferent to complexity. What language will be least fuss?
If one has seen the iconic Haskell definition for the Fibonacci numbers
fibs :: [Int]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
then one imagines there must be a similar idiom for products of generating functions. That search is what brought me here.

Long running time of a program that counts Ints

I want to write a program that takes a list of Ints and a length, and returns a list whose position i holds all input elements equal to i. For example:
[0,0,0,1,3,5,3,2,2,4,4,4] 6 -> [[0,0,0],[1],[2,2],[3,3],[4,4,4],[5]]
[0,0,4] 7 -> [[0,0],[],[],[],[4],[],[]]
[] 3 -> [[],[],[]]
[2,2] 3 -> [[],[],[2,2]]
So, here's my solution:
import Data.List
import Data.Function
f :: [Int] -> Int -> [[Int]]
f ls len = g 0 ls' [] where
  ls' = group . sort $ ls
  g :: Int -> [[Int]] -> [[Int]] -> [[Int]]
  g val [] accum
    | len == val = accum
    | otherwise  = g (val+1) [] (accum ++ [[]])
  g val (x:xs) accum
    | len == val    = accum
    | val == head x = g (val+1) xs (accum ++ [x])
    | otherwise     = g (val+1) (x:xs) (accum ++ [[]])
But the call f [] 1000000 takes a really long time. Why?
I see we're accumulating over some data structure. I think foldMap. I ask "Which Monoid"? It's some kind of lists of accumulations. Like this
newtype Bunch x = Bunch {bunch :: [x]}

instance Semigroup x => Semigroup (Bunch x) where
  Bunch xss <> Bunch yss = Bunch (glom xss yss) where
    glom [] yss = yss
    glom xss [] = xss
    glom (xs : xss) (ys : yss) = (xs <> ys) : glom xss yss

instance Semigroup x => Monoid (Bunch x) where
  mempty = Bunch []
Our underlying elements have some associative operator <>, and we can thus apply that operator pointwise to a pair of lists, just like zipWith does, except that when we run out of one of the lists, we don't truncate, rather we just take the other. Note that Bunch is a name I'm introducing for purposes of this answer, but it's not that unusual a thing to want. I'm sure I've used it before and will again.
If we can translate
0 -> Bunch [[0]] -- single 0 in place 0
1 -> Bunch [[],[1]] -- single 1 in place 1
2 -> Bunch [[],[],[2]] -- single 2 in place 2
3 -> Bunch [[],[],[],[3]] -- single 3 in place 3
...
and foldMap across the input, then we'll get the right number of each in each place. There should be no need for an upper bound on the numbers in the input to get a sensible output, as long as you are willing to interpret [] as "the rest is silence". Otherwise, like Procrustes, you can pad or chop to the length you need.
Note, by the way, that when mappend's first argument comes from our translation, we do a bunch of ([]++) operations, a.k.a. ids, then a single ([i]++), a.k.a. (i:), so if foldMap is right-nested (which it is for lists), then we will always be doing cheap operations at the left end of our lists.
Now, as the question works with lists, we might want to introduce the Bunch structure only when it's useful. That's what Control.Newtype is for. We just need to tell it about Bunch.
instance Newtype (Bunch x) [x] where
  pack   = Bunch
  unpack = bunch
And then it's
groupInts :: [Int] -> [[Int]]
groupInts = ala' Bunch foldMap (basis !!) where
  basis = ala' Bunch foldMap id [iterate ([]:) [], [[[i]] | i <- [0..]]]
What? Well, without going to town on what ala' is in general, its impact here is as follows:
ala' Bunch foldMap f = bunch . foldMap (Bunch . f)
meaning that, although f is a function to lists, we accumulate as if f were a function to Bunches: the role of ala' is to insert the correct pack and unpack operations to make that just happen.
We need (basis !!) :: Int -> [[Int]] to be our translation. Hence basis :: [[[Int]]] is the list of images of our translation, computed on demand at most once each (i.e., the translation, memoized).
For this basis, observe that we need these two infinite lists
[ []              [ [[0]]
, [[]]            , [[1]]
, [[],[]]         , [[2]]
, [[],[],[]]      , [[3]]
...               ...
combined Bunchwise. As both lists have the same length (infinity), I could also have written
basis = zipWith (++) (iterate ([]:) []) [[[i]] | i <- [0..]]
but I thought it was worth observing that this also is an example of Bunch structure.
Of course, it's very nice when something like accumArray hands you exactly the sort of accumulation you need, neatly packaging a bunch of grungy behind-the-scenes mutation. But the general recipe for an accumulation is to think "What's the Monoid?" and "What do I do with each element?". That's what foldMap asks you.
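Assuming the newtype package is in scope for Control.Newtype, one would expect, on the question's first example (without the length padding):
> groupInts [0,0,0,1,3,5,3,2,2,4,4,4]
[[0,0,0],[1],[2,2],[3,3],[4,4,4],[5]]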
The (++) operator copies the left-hand list. For this reason, adding to the beginning of a list is quite fast, but adding to the end of a list is very slow.
In summary, avoid adding things to the end of a list. Try to always add to the beginning instead. One simple way to do that is to build the list backwards, and then reverse it at the end. A more devious trick is to use "difference lists" (Google it). Another possibility is to use Data.Sequence rather than a list.
The first thing to note is that the most obvious way to implement this is to use a data structure that allows random access; an array is an obvious choice. Note that you need to add elements to the array multiple times and somehow "join" them.
accumArray is perfect for this.
So we get:
import Data.Array (accumArray, elems)

f l i = elems $ accumArray (\l e -> e:l) [] (0, i-1) (map (\e -> (e,e)) l)
And we're good to go.
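For example, on one of the question's test cases:
> f [0,0,4] 7
[[0,0],[],[],[],[4],[],[]]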
This approach does involve converting the final array back into a list, but that step is very likely faster than say sorting the list, which often involves scanning the list at least a few times for a list of decent size.
Whenever you use ++ you have to recreate the entire list, since lists are immutable.
A simple solution would be to use :, but that builds a reversed list. However that can be fixed using reverse, which results in only building two lists (instead of 1 million in your case).
Your concept of glomming things onto an accumulator is a very useful one, and both MathematicalOrchid and Guvante show how you can use that concept reasonably efficiently. But in this case, there is a simpler approach that is likely also faster. You started with
group . sort $ ls
and this was a very good place to start! You get a list that's almost the one you want, except that you need to fill in some blanks. How can we figure those out? The simplest way, though probably not quite the most efficient, is to work with a list of all the numbers you want to count up to: [0 .. len-1].
So we start with
f ls len = g [0 .. len-1] (group . sort $ ls)
  where
    ?
How do we define g? By pattern matching!
f ls len = g [0 .. len-1] (group . sort $ ls)
  where
    -- We may or may not have some lists left,
    -- but we counted as high as we decided we
    -- would
    g [] _ = []
    -- We have no lists left, so the rest of the
    -- numbers are not represented
    g ns [] = map (const []) ns
    -- This shouldn't be possible, because group
    -- doesn't make empty lists.
    g _ ([]:_) = error "group isn't working!"
    -- Finally, we have some work to do!
    g (n:ns) xls@(xl@(x:_):xls')
      | n == x    = xl : g ns xls'
      | otherwise = [] : g ns xls
That was nice, but making the list of numbers isn't free, so you might be wondering how you can optimize it. One method I invite you to try is using your original technique of keeping a separate counter, but following this same sort of structure.
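If you want something to check your attempt against, here is one possible shape for that counter-based variant (a sketch; the details are mine, not the answerer's):
import Data.List (group, sort)

f :: [Int] -> Int -> [[Int]]
f ls len = g 0 (group (sort ls))
  where
    g n _ | n >= len = []                      -- counted as high as requested
    g n []           = replicate (len - n) []  -- no groups left: all blanks
    g n (gl@(x:_) : gls)
      | n == x    = gl : g (n + 1) gls
      | otherwise = [] : g (n + 1) (gl : gls)
    g _ ([] : _)     = error "group made an empty list"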

How lazy is Haskell's `++`?

I'm curious how I should go about improving the performance of a Haskell routine that finds the lexicographically minimal cyclic rotation of a string.
import Data.List
swapAt n = f . splitAt n where f (a,b) = b++a
minimumrotation x = minimum $ map (\i -> swapAt i x) $ elemIndices (minimum x) x
I'd imagine that I should use Data.Vector rather than lists because Data.Vector provides in-place operations, probably just manipulating some indices into the original data. I shouldn't actually need to bother tracking the indices myself to avoid excess copying, right?
I'm curious how the ++ impacts the optimization, though. I'd imagine it produces a lazy string thunk that never actually does the appending until the string gets read that far. Ergo, a should never actually be appended onto b whenever minimum can eliminate that string early, say because it begins with a letter that sorts late. Is this correct?
xs ++ ys adds some overhead in all the list cells from xs, but once it reaches the end of xs it's free — it just returns ys.
Looking at the definition of (++) helps to see why:
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
i.e., it has to "re-build" the entire first list as the result is traversed. This article is very helpful for understanding how to reason about lazy code in this way.
The key thing to realise is that appending isn't done all at once; a new linked list is incrementally built by first walking through all of xs, and then putting ys where the [] would go.
So, you don't have to worry about reaching the end of b and suddenly incurring the one-time cost of "appending" a to it; the cost is spread out over all the elements of b.
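Both halves of this are easy to see in GHCi:
> head ([1, 2, 3] ++ undefined)   -- the right-hand list is never forced
1
> head (undefined ++ [1, 2, 3])   -- but the left-hand list must be walked
*** Exception: Prelude.undefined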
Vectors are a different matter entirely; they're strict in their structure, so even examining just the first element of xs V.++ ys incurs the entire overhead of allocating a new vector and copying xs and ys to it, just like in a strict language. The same applies to mutable vectors (except that the cost is incurred when you perform the operation, rather than when you force the resulting vector), although I think you'd have to write your own append operation with those anyway. You could represent a bunch of appended (immutable) vectors as [Vector a] or similar if this is a problem for you, but that just moves the overhead to when you flatten it back into a single Vector, and it sounds like you're more interested in mutable vectors.
Try
minimumrotation :: Ord a => [a] -> [a]
minimumrotation xs = minimum . take len . map (take len) $ tails (cycle xs)
  where
    len = length xs
I expect that to be faster than what you have, though index-juggling on an unboxed Vector or UArray would probably be still faster. But, is it really a bottleneck?
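A quick sanity check of the cycle-based version:
> minimumrotation "bca"
"abc"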
If you're interested in fast concatenation and a fast splitAt, use Data.Sequence.
I've made some stylistic modifications to your code, to make it look more like idiomatic Haskell, but the logic is exactly the same, except for a few conversions to and from Seq:
import qualified Data.Sequence as S
import qualified Data.Foldable as F
minimumRotation :: Ord a => [a] -> [a]
minimumRotation xs = F.toList
                   . F.minimum
                   . fmap (`swapAt` xs')
                   . S.elemIndicesL (F.minimum xs')
                   $ xs'
  where
    xs' = S.fromList xs
    swapAt n = f . S.splitAt n
      where f (a, b) = b S.>< a
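Behaviour is unchanged; for example:
> minimumRotation "cabcab"
"abcabc"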

Merging an unbounded number of ordered infinite sequences

I want to generate all natural numbers together with their decompositions into prime factors, up to a certain threshold.
I came up with the following function:
vGenerate :: [a]               -- generator set for monoid B* (Kleene star of B)
          -> (a, a -> a -> a)  -- (identity element, generating function)
          -> (a -> Bool)       -- filter
          -> [a]               -- B* filtered
vGenerate [] (g0, _) _ = [g0]
vGenerate (e:es) (g0, g) c =
  let coEs = vGenerate es (g0, g) c
      coE  = takeWhile c $ iterate (g e) g0
  in concatMap (\m -> takeWhile c $ map (g m) coE) coEs
gen then generates all natural numbers together with their prime factors:
gen threshold =
  let b         = map (\x -> (x, [x])) $ takeWhile (<= threshold) primes
      condition = (<= threshold) . fst
      g0        = (1, [])
      g (n, nl) (m, ml) = (n * m, nl ++ ml)
  in vGenerate b (g0, g) condition
primes :: [Integer]
primes = sieve [2..] where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]  -- simple but slow
I have the following questions:
It is not always known in advance how many numbers we will need. Can we modify vGenerate such that it starts with a lazy infinite list of primes, and generates all the factorizations in increasing order? The challenge is that we have an infinite list of primes, for each prime an infinite list of powers of that prime number, and then have to take all possible combinations. The lists are naturally ordered by increasing first element, so they could be generated lazily.
I documented vGenerate in terms of a monoid, with the intention of keeping it as abstract as possible, but perhaps this just obfuscates the code? I want to generalize it later (more as an exercise than for real usage), e.g. for generating raster points within certain constraints, which can also be put in the monoid context, so I thought it was a good start to get rid of all references to the problem space (in this case: primes). But I feel that the filtering function does not fit well in the abstraction: the generation must happen in an order that is monotone for the metric tested by c, because recursion is terminated as soon as c is not satisfied. Any advice?
Have a look at mergeAll :: Ord a => [[a]] -> [a] from the data-ordlist package. It merges an unbounded number of infinite sequences, as long as the sequences are ordered and the heads of the sequences are ordered. I've used it for similar problems before, for example to generate all numbers of the form 2^i*3^j.
> let numbers = mergeAll [[2^i*3^j | j <- [0..]] | i <- [0..]]
> take 20 numbers
[1,2,3,4,6,8,9,12,16,18,24,27,32,36,48,54,64,72,81,96]
You should be able to extend this to generate all numbers with their factorizations.
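For instance, here is one way that extension might look. This is only a sketch: facts and the naive primes below are my names and choices, not part of the answer. Every n > 1 is produced exactly once, as p * m where p is the smallest prime factor of n, so each inner list is ordered and the heads (p itself, paired with [p]) are ordered across primes, exactly as mergeAll requires:
import Data.List.Ordered (mergeAll)  -- from the data-ordlist package

-- A simple (slow) prime generator, good enough for a demonstration.
primes :: [Integer]
primes = sieve [2..] where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- All naturals paired with their prime factorizations, in increasing order.
facts :: [(Integer, [Integer])]
facts = (1, []) : mergeAll
  [ [ (p * n, p : fs) | (n, fs) <- facts, null fs || p <= head fs ]
  | p <- primes ]

> take 6 facts
[(1,[]),(2,[2]),(3,[3]),(4,[2,2]),(5,[5]),(6,[2,3])]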
