I'm working on learning Haskell, so I tried to implement a moving average function. Here is my code:
mAverage :: Int -> [Int] -> [Float]
mAverage x a = [fromIntegral k / fromIntegral x | k <- rawAverage]
  where
    rawAverage = mAverage' x a a

-- First list contains original values; second list contains moving average computations
mAverage' :: Int -> [Int] -> [Int] -> [Int]
mAverage' 1 a b = b
mAverage' x a b = mAverage' (x - 1) a' b'
  where
    a' = init a
    b' = zipWith (+) a' (tail b)
The user calls mAverage with the length of each averaging window and the list of values (e.g. mAverage 4 [1,2..100]).
However, when I run the code on the input mAverage 4 [1,2..100000], I get that it takes 3.6 seconds in ghci (using :set +s) and uses a gigabyte of memory. This seems very inefficient to me, as the equivalent function takes a fraction of a second in Python. Is there some way that I could make my code more efficient?
If you want to learn something new, take a look at this nice solution to the moving average problem. It was written by one of my students, so I won't claim authorship. I really like it because it's very short. The only weak spot is the average function; such sum-and-divide helpers are known to be problematic, and you could use Gabriel Gonzalez's Beautiful Folds instead. And yes, this function takes O(k) time (where k is the window size) to compute each window's average. I actually prefer that, because adding only the new element and subtracting the last one can run into floating-point errors. Oh, and it also uses the State monad :)
{-# LANGUAGE UnicodeSyntax #-}

module MovingAverage where

import Control.Monad (forM)
import Control.Monad.State (evalState, gets, modify)

moving :: Fractional a ⇒ Int → [a] → [a]
moving n _ | n <= 0 = error "non-positive argument"
moving n xs = evalState (forM xs $ \x → modify ((x:) . take (n-1)) >> gets average) []
  where
    average xs = sum xs / fromIntegral n
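For reference, here is what it returns on a small input (my own check, values worked out by hand); note that the first n-1 entries are averages of not-yet-full windows, still divided by n:

>>> moving 4 [1,2,3,4,5]
[0.25,0.75,1.5,2.5,3.5]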
Here is a straightforward list-based solution which is idiomatic and fast enough, though it requires more memory.
import Data.List (tails)

mavg :: Fractional b => Int -> [b] -> [b]
mavg k lst = take (length lst - k + 1) $ map average $ tails lst
  where average = (/ fromIntegral k) . sum . take k
This solution lets you use any function in place of average over the moving window.
The following solution is less general, but it runs in constant space and seems to be the fastest one.
import Data.List (scanl')

mavg :: Fractional b => Int -> [b] -> [b]
mavg k lst = map (/ fromIntegral k) $ scanl' (+) (sum h) $ zipWith (-) t lst
  where (h, t) = splitAt k lst
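To see why this runs in constant space: each new window sum is the previous sum plus the element entering the window minus the element leaving it, which is exactly what scanl' (+) over zipWith (-) t lst computes. A quick check (my own example, values worked out by hand):

>>> mavg 4 [1..10]
[2.5,3.5,4.5,5.5,6.5,7.5,8.5]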
Finally, the solution which uses a kind of Okasaki's persistent functional queue, to keep the moving window. It does make sense when dealing with streaming data, like conduits or pipes.
mavg k lst = map average $ scanl' enq ([], take k lst) $ drop k lst
  where
    average (l, r) = (sum l + sum r) / fromIntegral k
    enq (l, [])  x = enq ([], reverse l) x
    enq (l, _:r) x = (x:l, r)
And as was mentioned in the comments to the original post, do not use ghci for profiling. For example, you won't see any benefit from scanl' in ghci.
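For instance, a minimal benchmark sketch (my own, not from the answer; the file and module names are made up) would compile the scanl' version with optimizations and measure it with the runtime system's statistics:

module Main where

import Data.List (scanl')

mavg :: Fractional b => Int -> [b] -> [b]
mavg k lst = map (/ fromIntegral k) $ scanl' (+) (sum h) $ zipWith (-) t lst
  where (h, t) = splitAt k lst

main :: IO ()
main = print . sum $ mavg 4 [1 .. 1000000 :: Double]

Compile and run with: ghc -O2 Main.hs && ./Main +RTS -s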
Here's a solution for you.
The idea is to scan two lists, one where the averaging window starts, and another where it ends. Getting a tail end of a list costs as much as scanning the part we're skipping, and we're not copying anything. (If the window size were usually quite large, we could compute remaining_data while summing initial_data, in one go.)
We generate a list of partial sums as described in my comment, then divide them by the window width to get averages.
While slidingAverage calculates averages for a biased position (the window extends to the right), centeredSlidingAverage calculates centered averages, using half the window width to the left and half to the right.
import Data.List (splitAt, replicate)

slidingAverage :: Int -> [Int] -> [Double]  -- window size, source list -> list of averages
slidingAverage w xs = map divide $ initial_sum : slidingSum initial_sum xs remaining_data
  where
    divide n = fromIntegral n / fromIntegral w  -- divides the sums by the window size
    initial_sum = sum initial_data
    (initial_data, remaining_data) = splitAt w xs

centeredSlidingAverage :: Int -> [Int] -> [Double]  -- window size, source list -> list of averages
centeredSlidingAverage w xs = slidingAverage w $ left_padding ++ xs ++ right_padding
  where
    left_padding  = replicate half_width 0
    right_padding = replicate (w - half_width) 0
    half_width    = w `quot` 2  -- quot is integer division

slidingSum :: Int -> [Int] -> [Int] -> [Int]  -- window_sum before_window after_window -> list of sums
slidingSum _ _ [] = []
slidingSum window_sum before_window after_window = new_sum : slidingSum new_sum new_before new_after
  where
    value_to_go   = head before_window
    new_before    = tail before_window
    value_to_come = head after_window
    new_after     = tail after_window
    new_sum       = window_sum - value_to_go + value_to_come
When I try length $ slidingAverage 10 [1..1000000], it takes less than a second on my MBP. Due to the laziness, centeredSlidingAverage takes about the same time.
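A small worked example (mine, not the answerer's) of what the function returns:

>>> slidingAverage 4 [1..10]
[2.5,3.5,4.5,5.5,6.5,7.5,8.5]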
One simple way of doing it, which also runs in O(n):
movingAverage :: (Fractional a) => Int -> [a] -> [a]
movingAverage n _ | n <= 0 = error "non-positive argument"
movingAverage n xs = fmap average $ groupBy n xs
  where average xs' = sum xs' / fromIntegral (length xs')

groupBy :: Int -> [a] -> [[a]]
groupBy _ [] = []
groupBy n xs = go [] xs
  where
    go _ [] = []
    go l (x:xs') = (x:t) : go (x:l) xs'
      where t = take (n-1) l
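Note that the windows here are built looking backwards, so the first few are partial and are averaged over their actual length. A quick check (my own example, worked out by hand):

>>> movingAverage 3 [1,2,3,4,5]
[1.0,1.5,2.0,3.0,4.0]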
Another way is to use STUArray.
import Data.Array.Unboxed
import Data.Array.ST
import Control.Monad
import Control.Monad.ST

movingAverage :: [Double] -> IO [Double]
movingAverage vals = stToIO $ do
    let end = length vals - 1
    myArray <- newArray (1, end) 0 :: ST s (STUArray s Int Double)
    forM_ [1 .. end] $ \i -> do
        let cval = vals !! i
        let lval = vals !! (i-1)
        writeArray myArray i ((cval + lval) / 2)
    getElems myArray
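As written this averages each adjacent pair of values (a fixed window of two), and the vals !! i lookups are linear-time on a plain list. A quick check (my own example):

>>> movingAverage [1,2,3,4]
[1.5,2.5,3.5]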
I'm trying to solve the following problem in Haskell: given an integer, return the list of its digits. The constraint is that I may only use one of the fold* functions (* = {r, l, r1, l1}).
Without such constraint, the code is simple:
list_digits :: Int -> [Int]
list_digits 0 = []
list_digits n = list_digits r ++ [n - 10*r]
  where
    r = div n 10
But how do I use fold* to, essentially, grow a list of digits from an empty list?
Thanks in advance.
Is this a homework assignment? It's pretty strange for the assignment to require you to use foldr, because this is a natural use for unfoldr, not foldr. unfoldr :: (b -> Maybe (a, b)) -> b -> [a] builds a list, whereas foldr :: (a -> b -> b) -> b -> [a] -> b consumes a list. An implementation of this function using foldr would be horribly contorted.
import Data.List (unfoldr)

listDigits :: Int -> [Int]
listDigits = unfoldr digRem
  where digRem x
          | x <= 0    = Nothing
          | otherwise = Just (x `mod` 10, x `div` 10)
In the language of imperative programming, this is basically a while loop. Each iteration of the loop appends x `mod` 10 to the output list and passes x `div` 10 to the next iteration, so the digits come out least-significant first. In, say, Python, this'd be written as
def list_digits(x):
    output = []
    while x > 0:
        output.append(x % 10)
        x = x // 10
    return output
But unfoldr allows us to express the loop at a much higher level. unfoldr captures the pattern of "building a list one item at a time" and makes it explicit. You don't have to think through the sequential behaviour of the loop and realise that the list is being built one element at a time, as you do with the Python code; you just have to know what unfoldr does. Granted, programming with folds and unfolds takes a little getting used to, but it's worth it for the greater expressiveness.
If your assignment is marked by machine and it really does require you to type the word foldr into your program text, (you should ask your teacher why they did that and) you can play a sneaky trick with the following "id[]-as-foldr" function:
obfuscatedId = foldr (:) []
listDigits = obfuscatedId . unfoldr digRem
Though unfoldr is probably what the assignment meant, you can write this using foldr if you use foldr as a hylomorphism, that is, building up one list while it tears another down.
digits :: Int -> [Int]
digits n = snd $ foldr go (n, []) places
  where
    places = replicate num_digits ()
    num_digits | n > 0     = 1 + floor (logBase 10 $ fromIntegral n)
               | otherwise = 0
    go () (n, ds) = let (q, r) = n `quotRem` 10 in (q, r : ds)
Effectively, what we're doing here is using foldr as "map-with-state". We know ahead of time how many digits we need to output (using log10), just not what those digits are, so we use unit (()) values as stand-ins for those digits.
If your teacher's a stickler for just having a foldr at the top level, you can get away with making go partial:
digits' :: Int -> [Int]
digits' n = foldr go [n] places
  where
    places = replicate num_digits ()
    num_digits | n > 0     = floor (logBase 10 $ fromIntegral n)
               | otherwise = 0
    go () (n:ds) = let (q, r) = n `quotRem` 10 in (q : r : ds)
This has slightly different behaviour on non-positive numbers:
>>> digits 1234567890
[1,2,3,4,5,6,7,8,9,0]
>>> digits' 1234567890
[1,2,3,4,5,6,7,8,9,0]
>>> digits 0
[]
>>> digits' 0
[0]
>>> digits (negate 1234567890)
[]
>>> digits' (negate 1234567890)
[-1234567890]
-- Returns last N elements in list
lastN :: Int -> [a] -> [a]
lastN n xs = let m = length xs in drop (m - n) xs

-- Create contiguous array starting from index b within list a
produceContiguous :: [a] -> Int -> [[a]]
produceContiguous [] _ = [[]]
produceContiguous arr ix = scanl (\acc x -> acc ++ [x]) [arr !! ix] inset
  where inset = lastN (length arr - (ix + 1)) arr

-- Find maximum sum of all possible contiguous subarrays, modulo [n]
-- d is dummy data
let d = [1,2,3,10,6,3,1,47,10]
let maxResult = maximum $ map (\s -> maximum s) $ map (\c -> map (\ac -> (sum ac) `mod` (last n)) c) $ map (\n -> produceContiguous d n) [0 .. length d - 1]
I'm a Haskell newb - just a few days into it .. If I'm doing something obviously wrong, whoopsies
You can improve the runtime a lot by observing that map sum (produceContiguous d n) (which has runtime Ω(m^2), where m is the length of drop n d -- possibly even O(m^3), because you're appending to the end of acc on each iteration) can be collapsed to scanl (+) 0 (drop n d) (which has runtime O(m)). There are plenty of other stylistic changes I would make as well, but that's the main algorithmic one I can think of.
Cleaning up all the stylistic stuff, I would probably write:
import Control.Monad
import Data.List
addMod n x y = (x+y) `mod` n
maxResult n = maximum . (scanl (addMod n) 0 <=< tails)
In ghci:
*Main> jaggedGoofyMax 100 [1..1000]
99
(12.85 secs, 24,038,973,096 bytes)
*Main> dmwitMax 100 [1..1000]
99
(0.42 secs, 291,977,440 bytes)
Not shown here is the version of jaggedGoofyMax with only the optimization from my first paragraph applied; it has slightly better runtime and memory usage than dmwitMax when run in ghci (but is basically identical to dmwitMax when both are compiled with -O2). So you can see that even for modest input sizes this optimization can make a big difference.
What I am wanting to do is create a list of random integers, with no duplicates. As a first step, I have a function which makes a list of n random samples. How does one write this in a more Haskell idiomatic way, where an empty list does not need to be passed in to start the list off? I am sure I am missing something basic and fundamental.
import System.Random (randomRIO)

-- make a list of random integers.
-- takes a size, and an empty list.
-- returns a list of that length of random numbers.
f :: Int -> [Int] -> IO [Int]
f l xs | length xs >= l = return xs
f l xs = do
    r <- randomRIO (1, 40) :: IO Int
    f l $ r : xs
Usage:
*Main> f 6 []
[10,27,33,35,31,28]
Ultimately this function will have filtering to check for duplicate insertions, but that is a separate question. Although this may look like homework, it is not; it's part of my own attempt to come to grips with the State monad as used for random number generation, and I'm finding I am stuck at a much earlier point.
Well, you can operate on the output of the recursive call:
f :: Int -> IO [Int]
f 0 = return []
f n = do
    r <- randomRIO (1, 40)
    xs <- f (n - 1)
    return $ r : xs
Note however that it's important that the operation you perform on the result is fast. In this case r : xs is constant time. However, if you replace the last line with (say):
return $ xs ++ [r]
this would change the complexity of the function from linear to quadratic, because every ++ call has to scan the whole sequence of previously generated numbers before appending the new one.
However you could simply do:
f n = sequence $ replicate n (randomRIO (1, 40))
replicate creates an [IO Int] list of length n made of randomRIO actions, and sequence takes an [IO a] and turns it into an IO [a] by executing all the actions in order and collecting the results.
Even simpler, you could use replicateM which is already the function you want:
import Control.Monad(replicateM)
f n = replicateM n (randomRIO (1, 40))
or in point-free style:
f :: Int -> IO [Int]
f = flip replicateM $ randomRIO (1, 40)
This uses a Set to keep track of numbers already generated:
import System.Random
import qualified Data.Set as Set

generateUniqueRandoms :: (Int, Int) -> Int -> IO [Int]
generateUniqueRandoms range@(low, high) n =
    let maxN = min (high - low) n
    in go maxN Set.empty
  where
    go 0 _ = return []
    go n s = do
        r <- getUniqueRandom s
        xs <- go (n-1) (Set.insert r s)
        return $ r : xs
    getUniqueRandom s = do
        r <- randomRIO range
        if Set.member r s then getUniqueRandom s
                          else return r
Here is some sample output:
Main> generateUniqueRandoms (1, 40) 23
[29,22,2,17,5,8,24,27,10,16,6,3,14,37,25,34,30,28,7,31,15,20,36]
Main> generateUniqueRandoms (1, 40) 1000
[33,35,24,16,13,1,26,7,14,11,15,2,4,30,28,6,32,25,38,22,17,12,20,5,18,40,36,39,27,9,37,31,21,29,8,34,10,23,3]
Main> generateUniqueRandoms (1, 40) 0
[]
However, it is worth noting that if n is close to the width of the range, it'd be much more efficient to shuffle a list of all numbers in the range and take the first n of that.
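A sketch of that idea (my own code, not the answerer's; the function name is made up): keep a pool of every value in the range and repeatedly pull a random element out of it, so no retries are ever needed. The selection below is simple but quadratic in the worst case, which is fine for small ranges.

import System.Random (randomRIO)

-- Pick n distinct values from the range by removing random elements from a pool.
uniqueRandomsByShuffle :: (Int, Int) -> Int -> IO [Int]
uniqueRandomsByShuffle (low, high) n = go (min n (high - low + 1)) [low .. high]
  where
    go 0 _    = return []
    go k pool = do
        i <- randomRIO (0, length pool - 1)
        let (before, x : after) = splitAt i pool
        rest <- go (k - 1) (before ++ after)
        return (x : rest)

For example, uniqueRandomsByShuffle (1, 40) 40 returns all of 1..40 in a random order, with no retries no matter how full the range is.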
Forgive me if this is better suited to MathOverflow, but my question is probably too simple to put there.
I'm reading S.K Lando's Lectures on Generating Functions, which gives this definition of the product of two generating functions A and B:
A(s)*B(s) = a0*b0 + (a0*b1 + a1*b0)*s + (a0*b2 + a1*b1 + a2*b0)*s^2 ...
I understand that the s is just formal. But - and I know it's obtuse of me - I can't understand how the pattern of terms combining coefficients should continue. If anyone could just extend the definition to one or two more terms, it would probably help me a lot. Many thanks!
For bonus points, an algorithm in Haskell for multiplying two series (represented as lists of coefficients) would be much appreciated too - but it'd be enough for me just to understand the above definition.
Notice that the sum of the coefficient indices is constant in each term. For example a0*b0 -> 0+0=0, while a0*b1 -> 0+1=1 and a1*b0 -> 1+0=1.
Recall the story of young Gauss, who discovered that by summing a list of consecutive numbers with its reverse, we obtain a list of constants. The same trick applies here. We'll just take the first k a_i and b_i coefficients, reverse the list of b_i coefficients, and take the component-wise product of the lists.
Here's some Haskell code to generate the coefficient of s^i for i>=0, given i and the list of as=[a0,a1,...] and bs=[b0,b1,...]:
genCoeff :: [Double] -> [Double] -> Int -> Double
genCoeff as bs i = sum $ zipWith (*) (take (i+1) as) (reverse (take (i+1) bs))
To generate all of the coefficients, we simply map the partially applied function genCoeff as bs onto the list [0,1,...], i.e.
genAllCoeffs :: [Double] -> [Double] -> [Double]
genAllCoeffs as bs = map (genCoeff as bs) [0..]
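Note that genCoeff implicitly treats the inputs as full power series; if your finite lists have different lengths, pad them with zeros so the reversed list lines up. A quick check (my own example, worked out by hand) multiplying (1 + 2s + 3s^2) by (4 + 5s):

>>> take 4 $ genAllCoeffs ([1,2,3] ++ repeat 0) ([4,5] ++ repeat 0)
[4.0,13.0,22.0,15.0]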
Here is a solution which doesn't use reverse:
add [] bs = bs
add as [] = as
add (a:as) (b:bs) = (a+b) : add as bs
mult :: [Int] -> [Int] -> [Int]
mult [] bs = [] -- note: [] = 0
mult as [] = []
mult (a:as) (b:bs) = (a*b) : add (add (map (*a) bs) (map (*b) as)) (0:mult as bs)
test1 = do
    let as = [1,2,3]
        bs = [4,5]
    putStrLn $ "as = " ++ show as
    putStrLn $ "bs = " ++ show bs
    putStrLn $ "as * bs = " ++ show (mult as bs)
Output:
as = [1,2,3]
bs = [4,5]
as * bs = [4,13,22,15]
It was derived from the following identity:
(a0+a1*x) * (b0 + b1*x) = a0*b0 + a0*b1*x + b0*a1*x + a1*b1*x^2
The correspondences are:
a0*b0 <-> a*b
a0*b1*x <-> map (*a) bs
b0*a1*x <-> map (*b) as
a1*b1*x^2 <-> (0:mult as bs)
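Because mult produces its result lazily (each output coefficient needs only finitely many input coefficients), it also works on infinite coefficient lists. For instance (my own example), squaring the geometric series 1/(1-x) = 1 + x + x^2 + ... yields the counting numbers:

>>> take 5 $ mult (repeat 1) (repeat 1)
[1,2,3,4,5]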
I needed an efficient sliding window function in Haskell, so I wrote the following:
windows n xz@(x:xs)
  | length v < n = []
  | otherwise = v : windows n xs
  where
    v = take n xz
My problem with this is that I think the complexity is O(n*m), where m is the length of the list and n is the window size. You walk down the list once for take, another time for length, and you do that at essentially m - n positions. It seems like it could be more efficient than this, but I'm at a loss for how to make it more linear. Any takers?
You can't get better than O(m*n), since this is the size of the output data structure.
But you can avoid checking the lengths of the windows if you reverse the order of operations: First create n shifted lists and then just zip them together. Zipping will get rid of those that don't have enough elements automatically.
import Control.Applicative
import Data.Traversable (sequenceA)
import Data.List (tails)
transpose' :: [[a]] -> [[a]]
transpose' = getZipList . sequenceA . map ZipList
Zipping a list of lists is just a transposition, but unlike transpose from Data.List it throws away outputs that would have less than n elements.
Now it's easy to make the window function: Take m lists, each shifted by 1, and just zip them:
windows :: Int -> [a] -> [[a]]
windows m = transpose' . take m . tails
Works also for infinite lists.
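A quick check (my own example):

>>> windows 3 [1..5]
[[1,2,3],[2,3,4],[3,4,5]]
>>> take 2 (windows 3 [1..])
[[1,2,3],[2,3,4]]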
You can use Seq from Data.Sequence, which has O(1) enqueue and dequeue at both ends:
import Data.Foldable (toList)
import qualified Data.Sequence as Seq
import Data.Sequence ((|>))

windows :: Int -> [a] -> [[a]]
windows n0 = go 0 Seq.empty
  where
    go n s (a:as) | n' <  n0  =              go n' s'  as
                  | n' == n0  = toList s'  : go n' s'  as
                  | otherwise = toList s'' : go n  s'' as
      where
        n'  = n + 1          -- O(1)
        s'  = s |> a         -- O(1)
        s'' = Seq.drop 1 s'  -- O(1)
    go _ _ [] = []
Note that if you materialize the entire result your algorithm is necessarily O(N*M) since that is the size of your result. Using Seq just improves performance by a constant factor.
Example use:
>>> windows 3 [1..5]
[[1,2,3],[2,3,4],[3,4,5]]
First let's get the windows without worrying about the short ones at the end:
import Data.List (tails)
windows' :: Int -> [a] -> [[a]]
windows' n = map (take n) . tails
> windows' 3 [1..5]
[[1,2,3],[2,3,4],[3,4,5],[4,5],[5],[]]
Now we want to get rid of the short ones without checking the length of every one.
Since we know they are at the end, we could lose them like this:
windows n xs = take (length xs - n + 1) (windows' n xs)
But that's not great since we still go through xs an extra time to get its length. It also doesn't work on infinite lists, which your original solution did.
Instead let's write a function for using one list as a ruler to measure the amount to take from another:
takeLengthOf :: [a] -> [b] -> [b]
takeLengthOf = zipWith (flip const)
> takeLengthOf ["elements", "get", "ignored"] [1..10]
[1,2,3]
Now we can write this:
windows :: Int -> [a] -> [[a]]
windows n xs = takeLengthOf (drop (n-1) xs) (windows' n xs)
> windows 3 [1..5]
[[1,2,3],[2,3,4],[3,4,5]]
Works on infinite lists too:
> take 5 (windows 3 [1..])
[[1,2,3],[2,3,4],[3,4,5],[4,5,6],[5,6,7]]
As Gabriella Gonzalez says, the time complexity is no better if you want to use the whole result. But if you only use some of the windows, we now manage to avoid doing the work of take and length on the ones you don't use.
If you want O(1) length then why not use a structure that provides O(1) length? Assuming you aren't looking for windows from an infinite list, consider using:
import qualified Data.Vector as V
import Data.Vector (Vector)
import Data.List (unfoldr)

windows :: Int -> [a] -> [[a]]
windows n = map V.toList . unfoldr go . V.fromList
  where
    go xs | V.length xs < n = Nothing
          | otherwise =
              let (a, b) = V.splitAt n xs
              in Just (a, b)
Conversion of each window from a vector to a list might bite you some; I won't hazard an optimistic guess there, but I will bet that the performance is better than the list-only version.
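A quick check (my own example) shows that, as written, this steps the window by n rather than by 1, which the next answer comments on:

>>> windows 3 [1..7]
[[1,2,3],[4,5,6]]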
For the sliding window I also used unboxed Vectors, since length, take, drop and splitAt are all O(1) operations.
The code from Thomas M. DuBuisson gives a window shifted by n, not a sliding window, except when n = 1. A (++) is therefore needed; however, it has a cost of O(n+m), so be careful where you put it.
{-# LANGUAGE BangPatterns #-}

import qualified Data.Vector.Unboxed as V
import Data.Vector.Unboxed (Vector)
import Data.List

windows :: Int -> Vector Double -> [[Double]]
windows n = unfoldr go
  where
    go !xs | V.length xs < n = Nothing
           | otherwise =
               let (a, b) = V.splitAt 1 xs
                   c = V.toList a ++ V.toList (V.take (n-1) b)
               in Just (c, b)
I tried it out with +RTS -sstderr and:
putStrLn $ show (L.sum $ L.concat $ windows 10 (U.fromList $ [1..1000000]))
and got real time 1.051s and 96.9% usage, keeping in mind that after the sliding window two O(m) operations are performed.