Elegant implementation of n-dimensional matrix multiplication using lists? - haskell
List functions allow us to implement arbitrarily-dimensional vector math quite elegantly. For example:
import Control.Monad (join) -- join f x = f x x, for the function monad

on = (.) . (.)
add = zipWith (+)
sub = zipWith (-)
mul = zipWith (*)
dist = len `on` sub
dot = sum `on` mul
len = sqrt . join dot
And so on.
main = do
  print $ add [1,2,3] [1,1,1] -- [2,3,4]
  print $ len [1,1,1]         -- 1.7320508075688772
  print $ dot [2,0,0] [2,0,0] -- 4
Of course, this is not the most efficient solution, but it is insightful to look at: one can say map, zipWith and such generalize those vector operations to any dimension. There is one function I couldn't implement elegantly, though, and that is the cross product. Since a possible n-dimensional generalization of the cross product is the n-dimensional matrix determinant, how can I implement matrix multiplication elegantly?
Edit: yes, I asked a question completely unrelated to the problem I set up. Fml.
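For context, a sketch (an editor's addition, not from the question or the answer below): the usual list-of-lists rendering of matrix multiplication in Haskell is itself quite compact, pairing each row with each column via transpose:

import Data.List (transpose)

matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- e.g. matMul [[1,2],[3,4]] [[5,6],[7,8]] == [[19,22],[43,50]]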
It just so happens I have some code lying around for doing n-dimensional matrix operations which I thought was quite cute when I wrote it at least:
{-# LANGUAGE NoMonomorphismRestriction #-}
module MultiArray where
import Control.Arrow
import Control.Monad
import Data.Ix
import Data.Maybe
import Data.Array (Array)
import qualified Data.Array as A
-- {{{ from Dmwit.hs
deleteAt n xs = take n xs ++ drop (n + 1) xs
insertAt n x xs = take n xs ++ x : drop n xs
doublify f g xs ys = f (uncurry g) (zip xs ys)
any2 = doublify any
all2 = doublify all
-- }}}
-- makes the most sense when ls and hs have the same length
instance Ix a => Ix [a] where
    range = sequence . map range . uncurry zip
    inRange = all2 inRange . uncurry zip
    rangeSize = product . uncurry (zipWith (curry rangeSize))
    index (ls, hs) xs = fst . foldr step (0, 1) $ zip indices sizes where
        indices = zipWith index (zip ls hs) xs
        sizes   = map rangeSize $ zip ls hs
        step (i, b) (s, p) = (s + p * i, p * b)
fold :: (Enum i, Ix i) => ([a] -> b) -> Int -> Array [i] a -> Array [i] b
fold f n a = A.array newBound assocs where
    (oldLowBound, oldHighBound) = A.bounds a
    (newLowBoundBeg , dimLow : newLowBoundEnd ) = splitAt n oldLowBound
    (newHighBoundBeg, dimHigh: newHighBoundEnd) = splitAt n oldHighBound
    assocs = [ (beg ++ end, f [a A.! (beg ++ i : end) | i <- [dimLow..dimHigh]])
             | beg <- range (newLowBoundBeg, newHighBoundBeg)
             , end <- range (newLowBoundEnd, newHighBoundEnd)
             ]
    newBound = (newLowBoundBeg ++ newLowBoundEnd, newHighBoundBeg ++ newHighBoundEnd)
flatten a = check a >> return value where
    check = guard . (1==) . length . fst . A.bounds
    value = A.ixmap ((head *** head) . A.bounds $ a) return a
elementWise :: (MonadPlus m, Ix i) => (a -> b -> c) -> Array i a -> Array i b -> m (Array i c)
elementWise f a b = check >> return value where
    check = guard $ A.bounds a == A.bounds b
    value = A.listArray (A.bounds a) (zipWith f (A.elems a) (A.elems b))
unsafeFlatten a = fromJust $ flatten a
unsafeElementWise f a b = fromJust $ elementWise f a b
matrixMult a b = fold sum 1 $ unsafeElementWise (*) a' b' where
    aBounds = (join (***) (!!0)) $ A.bounds a -- range of a's first (row) index
    bBounds = (join (***) (!!1)) $ A.bounds b -- range of b's second (column) index
    a' = copy 2 bBounds a -- a'[i,j,k] = a[i,j]
    b' = copy 0 aBounds b -- b'[i,j,k] = b[j,k]
bijection f g a = A.ixmap ((f *** f) . A.bounds $ a) g a
unFlatten = bijection return head
matrixTranspose = bijection reverse reverse
copy n (low, high) a = A.ixmap (newBounds a) (deleteAt n) a where
    newBounds = (insertAt n low *** insertAt n high) . A.bounds
The cute bit here is matrixMult, which is one of the few operations that is specialized to two-dimensional arrays. It expands its first argument along one dimension (by putting a copy of the two-dimensional object into each slice of the three-dimensional object); expands its second along another; does pointwise multiplication (now in a three-dimensional array); then collapses the fabricated third dimension by summing. Quite nice.
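As a quick sanity check, here is a hypothetical usage sketch (my addition, not part of the original answer), assuming it lives in the MultiArray module above; it multiplies a 2x3 matrix by a 3x2 matrix using the list-indexed arrays:

example :: Array [Int] Integer
example = matrixMult a b where
    a = A.listArray ([0,0], [1,2]) [1,2,3,4,5,6]    -- 2x3, row-major
    b = A.listArray ([0,0], [2,1]) [7,8,9,10,11,12] -- 3x2, row-major

-- A.elems example should be [58,64,139,154], i.e. the 2x2 product read row by row.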
Related
Haskell: find subsets of list that add up to given number
I am working through some examples and trying to implement a function that counts how many subsets of a list add up to a given number, rewriting some Python implementations in Haskell:

import Data.List (sort, sortBy)

test1 :: [Int]
test1 = [2,4,6,10,1,4,5,6,7,8]

countSets1 total input = length [n | n <- subsets $ sort input, sum n == total]
  where
    subsets [] = [[]]
    subsets (x:xs) = map (x:) (subsets xs) ++ subsets xs

countSets2 total input = go (reverse . sort $ input) total
  where
    go [] _ = 0
    go (x:xs) t
      | t == 0 = 1
      | t < 0 = 0
      | t < x = go xs t
      | otherwise = go xs (t - x) + go xs t

countSets3 total input = go (sort input) total (length input - 1)
  where
    go xxs t i
      | t == 0 = 1
      | t < 0 = 0
      | i < 0 = 0
      | t < (xxs !! i) = go xxs t (i-1)
      | otherwise = go xxs (t - (xxs !! i)) (i-1) + go xxs t (i-1)

I can't figure out why countSets2 does not return the same result as countSets3 (a copy of the Python version):

λ: countSets1 16 test1
24
λ: countSets2 16 test1
13
λ: countSets3 16 test1
24

EDIT: @freestyle pointed out that the order of my conditions was different in the two solutions:

countSets2 total input = go (sortBy (flip compare) input) total
  where
    go _ 0 = 1
    go [] _ = 0
    go (x:xs) t
      | t < 0 = 0
      | t < x = go xs t
      | otherwise = go xs (t - x) + go xs t

fixes the problem.
I'm not sure about your logic, but in your second solution I think you need

go [] 0 = 1

otherwise your code gives go [] 0 = 0, which feels wrong.
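A minimal sketch of that suggestion applied to the question's countSets2 (my arrangement; the new clause has to come before the go [] _ = 0 catch-all so it can match first):

import Data.List (sort)

countSets2 total input = go (reverse . sort $ input) total
  where
    go [] 0 = 1   -- the extra base case proposed above
    go [] _ = 0
    go (x:xs) t
      | t == 0    = 1
      | t < 0     = 0
      | t < x     = go xs t
      | otherwise = go xs (t - x) + go xs t

-- With this clause, countSets2 16 test1 should agree with countSets1 (24).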
I don't address your error, so I don't expect you to accept my answer; I'm only providing an alternative solution:

import Math.Combinat.Sets (sublists)

getSublists :: [Int] -> Int -> [[Int]]
getSublists list total = filter (\x -> sum x == total) (sublists list)

countSublists :: [Int] -> Int -> Int
countSublists list total = length $ getSublists list total

The module Math.Combinat.Sets is from the combinat package.

>>> countSublists [2,4,6,10,1,4,5,6,7,8] 16
24
This problem looks similar to a pearl written by Richard Bird on how many sums and products can make 100. I'll use it as a template here. First, the specification:

subseqn :: (Num a, Eq a) => a -> [a] -> Int
subseqn n = length . filter ((== n) . sum) . subseqs
  where
    subseqs = foldr prefix [[]]
    prefix x xss = map (x:) xss ++ xss

Observe that a lot of work may be wasted in subseqs. Intuitively, we can discard candidates as soon as they exceed n, i.e. use the weaker predicate (<= n) somewhere. Trivially, filtering on it before filtering on the stronger one does not change the outcome. Then you can derive

filter ((== n) . sum) . subseqs
  = {- insert weaker predicate -}
    filter ((== n) . sum) . filter ((<= n) . sum) . subseqs
  = {- definition of subseqs -}
    filter ((== n) . sum) . filter ((<= n) . sum) . foldr prefix [[]]
  = {- fusion law of foldr -}
    filter ((== n) . sum) . foldr prefix' [[]]

The fusion law states that f . foldr g a = foldr h b iff

1. f is strict
2. f a = b
3. f (g x y) = h x (f y)

Here, a = b = [[]], f is filter ((<= n) . sum) and g is prefix. You can derive h (i.e. prefix') by observing that the predicate can be applied before prefixing:

filter ((<= n) . sum) (prefix x xss) =
    filter ((<= n) . sum) (prefix x (filter ((<= n) . sum) xss))

which is exactly the third condition; then h is filter ((<= n) . sum) . prefix.

Another observation is that sum is computed too many times. To get around that, we can modify our definition of subseqn so that each candidate carries its own sum. Let's use

(&&&) :: (a -> b) -> (a -> c) -> a -> (b, c)
(&&&) f g x = (f x, g x)

and derive

filter ((== n) . sum) . subseqs
  = {- use &&& -}
    filter ((== n) . snd) . map (id &&& sum) . subseqs
  = {- definition of subseqs -}
    filter ((== n) . snd) . map (id &&& sum) . foldr prefix' [[]]
  = {- fusion law of foldr -}
    filter ((== n) . snd) . foldr prefix'' [[]]

I won't go through the whole derivation of prefix'', it is quite long. The gist is that you can avoid using sum at all by working on pairs, so that the sum is computed iteratively. Initially the sum is 0 for the empty list and all we have to do is add the new candidate to it. We update our base case from [[]] to [([], 0)] and get:

prefix'' x = filter ((<= n) . snd) . uncurry zip . (prefix x *** add x) . unzip
  where
    (***) :: (a -> a') -> (b -> b') -> (a, b) -> (a', b')
    (***) f g (x, y) = (f x, g y)

    add :: Num a => a -> [a] -> [a]
    add x xs = map (x+) xs ++ xs

Here is the final version:

subseqn :: (Num a, Ord a) => a -> [a] -> Int
subseqn n = length . filter ((== n) . snd) . foldr expand [([], 0)]
  where
    expand x = filter ((<= n) . snd) . uncurry zip . (prefix x *** add x) . unzip
    prefix x xss = map (x:) xss ++ xss
    add x xs = map (x+) xs ++ xs

(*** and &&& are from Control.Arrow)
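As a quick check (my addition, not from the answer), the final version should agree with the question's countSets1 on its test data:

λ: subseqn 16 [2,4,6,10,1,4,5,6,7,8]
24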
Haskell: generate random numbers with a property, without explicit recursion
Here I have a function to generate a stream of random numbers between 0 and 999:

randomHelp :: RandomGen g => g -> [Int]
randomHelp g = zipWith (mod) (map fst $ iterate (next . snd) $ next $ snd $ split g) $ repeat 1000

I would like to select from the stream defined above only those numbers where each elem(i) and elem(i + 1) satisfy a property, for example that their gcd is one. All I can think of is a fold, because I can start with an accumulator which contains the number 1 (let's assume 1 will be the first element I want to show), check the property in the fold's function and, if it is satisfied, add the element to the accumulator. The problem is that the program hangs, because of a stack overflow I think. Here is the function:

randomFunc :: RandomGen g => g -> [Int]
randomFunc g = foldl (\acc x -> if (gcd x (last acc) == 1) then acc ++ [x] else acc) [1] (randomHelp g)

Note: I don't want to use explicit recursion.
A right fold would probably fit better, something like:

import System.Random (RandomGen, randomRs, mkStdGen)

randomFunc :: RandomGen g => g -> [Int]
randomFunc g = foldr go (const []) (randomRs (1, 20) g) 1
  where
    go x f lst = if gcd x lst == 1 then x : f x else f lst

then

\> take 20 . randomFunc $ mkStdGen 1
[16,7,6,19,8,15,16,1,9,2,15,17,14,3,11,17,15,8,1,5]

Doing so you may build the list using : instead of ++ (which may cause quadratic performance cost), and you may bypass the call to last.
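The same foldr trick can also be applied to the question's own randomHelp stream instead of randomRs; a self-contained sketch (randomFunc' is a hypothetical name, the technique is the one shown above):

import System.Random (RandomGen, next, split, mkStdGen)

-- the question's generator, unchanged
randomHelp :: RandomGen g => g -> [Int]
randomHelp g = zipWith mod (map fst $ iterate (next . snd) $ next $ snd $ split g) (repeat 1000)

randomFunc' :: RandomGen g => g -> [Int]
randomFunc' g = foldr go (const []) (randomHelp g) 1
  where
    go x f prev = if gcd x prev == 1 then x : f x else f prev

main :: IO ()
main = print . take 20 . randomFunc' $ mkStdGen 1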
Nondeterminism for infinite inputs
Using lists to model nondeterminism is problematic if the inputs can take infinitely many values. For example

pairs = [ (a,b) | a <- [0..], b <- [0..] ]

This will return [(0,0),(0,1),(0,2),...] and never get around to showing you any pair whose first element is not 0. Using the Cantor pairing function to collapse a list of lists into a single list can get around this problem. For example, we can define a bind-like operator that orders its outputs more intelligently by

(>>>=) :: [a] -> (a -> [b]) -> [b]
as >>>= f = cantor (map f as)

cantor :: [[a]] -> [a]
cantor xs = go 1 xs
  where
    go _ [] = []
    go n xs = hs ++ go (n+1) ts
      where
        ys = filter (not.null) xs
        hs = take n $ map head ys
        ts = mapN n tail ys

mapN :: Int -> (a -> a) -> [a] -> [a]
mapN _ _ [] = []
mapN n f xs@(h:t)
  | n <= 0    = xs
  | otherwise = f h : mapN (n-1) f t

If we now wrap this up as a monad, we can enumerate all possible pairs

newtype Select a = Select { runSelect :: [a] }

instance Monad Select where
    return a = Select [a]
    Select as >>= f = Select $ as >>>= (runSelect . f)

pairs = runSelect $ do
    a <- Select [0..]
    b <- Select [0..]
    return (a,b)

This results in

>> take 15 pairs
[(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0),(0,4),(1,3),(2,2),(3,1),(4,0)]

which is a much more desirable result. However, if we were to ask for triples instead, the ordering on the outputs isn't as "nice" and it's not even clear to me that all outputs are eventually included --

>> take 15 triples
[(0,0,0),(0,0,1),(1,0,0),(0,1,0),(1,0,1),(2,0,0),(0,0,2),(1,1,0),(2,0,1),(3,0,0),(0,1,1),(1,0,2),(2,1,0),(3,0,1),(4,0,0)]

Note that (2,0,1) appears before (0,1,1) in the ordering -- my intuition says that a good solution to this problem will order the outputs according to some notion of "size", which could be an explicit input to the algorithm, or could be given implicitly (as in this example, where the "size" of an input is its position in the input lists). When combining inputs, the "size" of a combination should be some function (probably the sum) of the size of the inputs. Is there an elegant solution to this problem that I am missing?
TL;DR: It flattens two dimensions at a time, rather than flattening three at once. You can't tidy this up in the monad because >>= is binary, not ternary etc.

I'll assume you defined

(>>>=) :: [a] -> (a -> [b]) -> [b]
as >>>= f = cantor $ map f as

to interleave the list of lists.

You like that because it goes diagonally:

sums = runSelect $ do
    a <- Select [0..]
    b <- Select [0..]
    return (a+b)

gives

ghci> take 36 sums
[0,1,1,2,2,2,3,3,3,3,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7,7]

so it's pleasingly keeping the "sizes" in order, but the pattern appears to be broken for triples, and you doubt completeness, but you needn't. It's doing the same trick, but twice, rather than for all three at once:

triplePairs = runSelect $ do
    a <- Select [0..]
    b <- Select [0..]
    c <- Select [0..]
    return $ (a,(b,c))

The second pair is treated as a single source of data, so notice that:

ghci> map fst $ take 36 pairs
[0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7]
ghci> map fst $ take 36 triplePairs
[0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7]

and (adding some spaces/newlines for clarity of pattern):

ghci> map snd $ take 36 pairs
[0, 1,0, 2,1,0, 3,2,1,0, 4,3,2,1,0, 5,4,3,2,1,0, 6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0]

ghci> map snd $ take 36 triplePairs
[(0,0),
 (0,1),(0,0),
 (1,0),(0,1),(0,0),
 (0,2),(1,0),(0,1),(0,0),
 (1,1),(0,2),(1,0),(0,1),(0,0),
 (2,0),(1,1),(0,2),(1,0),(0,1),(0,0),
 (0,3),(2,0),(1,1),(0,2),(1,0),(0,1),(0,0),
 (1,2),(0,3),(2,0),(1,1),(0,2),(1,0),(0,1),(0,0)]

so you can see it's using exactly the same pattern. This doesn't preserve total sums and it oughtn't, because we're getting to three dimensions by flattening two dimensions first before flattening the third in. The pattern is obscured, but it's just as guaranteed to make it to the end of the list.

Sadly if you want to do three dimensions in a sum-preserving way, you'll have to write cantor2, cantor3 and cantor4 functions, possibly a cantorN function, but you'll have to ditch the monadic interface, which is inherently based on the bracketing of >>=, hence two-at-a-time flattening of dimensions.
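For what it's worth, a sum-preserving three-way enumeration can also be written directly by generating triples grouped by their total; this is a sketch of the cantor3 idea mentioned above (my addition, not code from the answer):

cantor3 :: [(Integer, Integer, Integer)]
cantor3 = [ (a, b, s - a - b) | s <- [0..], a <- [0..s], b <- [0..s - a] ]

-- take 10 cantor3 == [(0,0,0),(0,0,1),(0,1,0),(1,0,0),(0,0,2),(0,1,1),(0,2,0),(1,0,1),(1,1,0),(2,0,0)]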
import Control.Applicative
import Control.Arrow

data Select a = Select [a] | Selects [Select a]

instance Functor Select where
  fmap f (Select x)    = Select $ map f x
  fmap f (Selects xss) = Selects $ map (fmap f) xss

instance Applicative Select where
  pure = Select . (:[])
  Select fs  <*> xs = Selects $ map (`fmap`xs) fs
  Selects fs <*> xs = Selects $ map (<*>xs) fs

instance Monad Select where
  return = pure
  Select xs  >>= f = Selects $ map f xs
  Selects xs >>= f = Selects $ map (>>=f) xs

runSelect :: Select a -> [a]
runSelect = go 1
  where
    go n xs = uncurry (++) . second (go $ n+1) $ splitOff n xs
    splitOff n (Select xs)   = second Select $ splitAt n xs
    splitOff n (Selects sls) = (concat hs, Selects $ tsl ++ rl)
      where ((hs, tsl), rl) = first (unzip . map (splitOff n)) $ splitAt n sls

*Select> take 15 . runSelect $ do { a<-Select [0..]; b<-Select [0..]; return (a,b) }
[(0,0),(0,1),(1,0),(1,1),(0,2),(1,2),(2,0),(2,1),(2,2),(0,3),(1,3),(2,3),(3,0),(3,1),(3,2)]
*Select> take 15 . runSelect $ do { a<-Select [0..]; b<-Select [0..]; c<-Select [0..]; return (a,b,c) }
[(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1),(0,0,2),(0,1,2),(0,2,0),(0,2,1),(0,2,2),(1,0,2),(1,1,2)]

Note that this is still not quite Cantor-tuples ((0,1,1) shouldn't come before (1,0,0)), but getting it correct would be possible as well in a similar manner.
A correct multidimensional enumerator could be represented with a temporary state object:

{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverlappingInstances #-}

class Space a b where
  slice :: a -> ([b], a)

instance Space [a] a where
  slice (l:ls) = ([l], ls)
  slice []     = ([], [])

instance (Space sp x) => Space ([sp], [sp]) x where
  slice (fs, b:bs) =
    let ss    = map slice (b : fs)
        yield = concat $ map fst ss
    in (yield, (map snd ss, bs))

Here an N dimensional space is represented by a tuple of lists of N-1 dimensional subspaces that have and haven't been touched by the enumeration. You can then use the following to produce a well ordered list:

enumerate :: (Space sp x) => sp -> [x]
enumerate sp = let (sl, sp') = slice sp
               in sl ++ enumerate sp'

Example in Ideone.
The omega package does exactly what you want and guarantees that every element will be eventually visited:

import Control.Applicative
import Control.Monad.Omega

main = print . take 200 . runOmega $
         (,,) <$> each [0..] <*> each [0..] <*> each [0..]

Another option would be to use LogicT. It gives more flexibility (if you need it) and has operations such as (>>-) that ensure that every combination is eventually encountered.

import Control.Applicative
import Control.Monad
import Control.Monad.Logic

-- | Convert a list into any MonadPlus.
each :: (MonadPlus m) => [a] -> m a
each = msum . map return

-- | A fair variant of '(<*>)' that ensures that both branches are explored.
(<#>) :: (MonadLogic m) => m (a -> b) -> m a -> m b
(<#>) f k = f >>- (\f' -> k >>- (\k' -> return $ f' k'))
infixl 4 <#>

main = print . observeMany 200 $ (,,) <$> each [0..] <#> each [0..] <#> each [0..]
Data Parallel Haskell Prefix Sum
I'm playing with some Data Parallel Haskell code and found myself in need of a prefix sum. However I didn't see any basic operator in the dph package for prefix sum. I rolled my own, but, since I'm new to dph, I'm not sure if it's properly taking advantage of parallelization:

{-# LANGUAGE ParallelArrays #-}
{-# OPTIONS_GHC -fvectorise #-}
module PrefixSum ( scanP ) where

import Data.Array.Parallel (lengthP, indexedP, mapP, zipWithP, concatP, filterP, singletonP, sliceP, (+:+), (!:))
import Data.Array.Parallel.Prelude.Int ((<=), (-), (==), Int, mod) -- hide prelude
import qualified Prelude

-- assuming zipWithP (a -> b -> c) given
--   [:a:] of length n and
--   [:b:] of length m, n /= m
-- will return
--   [:c:] of length min n m
scanP :: (a -> a -> a) -> [:a:] -> [:a:]
scanP f xs = if lengthP xs <= 1
               then xs
               else head +:+ tail
  where
    -- [: x_0, x_2, ..., x_2n :]
    evens = mapP snd . filterP (even . fst) $ indexedP xs
    -- [: x_1, x_3 ... :]
    odds  = mapP snd . filterP (odd . fst)  $ indexedP xs

    lenEvens = lengthP evens
    lenOdds  = lengthP odds

    -- calculate the prefix sums [:w:] of the pair sums [:z:]
    psums = scanP f $ zipWithP f evens odds

    -- calculate the total prefix sums as
    -- [: x_0, w_0, f w_0 x_2, w_1, f w_1 x_4, ...,
    head = singletonP (evens !: 0)
    body = concatP . zipWithP (\p e -> [: p, f p e :]) psums $ sliceP 1 lenOdds evens
    -- ending at either
    --   ... w_{n-1}, f w_{n-1} x_2n :]
    -- or
    --   ... w_{n-1}, f w_{n-1} x_2n, w_n :]
    -- depending on whether the length of [:x:] is 2n+1 or 2n+2
    tail = if lenEvens == lenOdds
             then body +:+ singletonP (psums !: (lenEvens - 1))
             else body

-- reimplement some of Prelude so it can be vectorised
f $ x = f x
infixr 0 $

(.) f g y = f (g y)

snd (a,b) = b
fst (a,b) = a

even n = n `mod` 2 == 0
odd  n = n `mod` 2 == 1
Parallel prefix scans are supported, in fact, they're rather fundamental. So just pass (+) as your associative operator.
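For readers without dph installed, here is a plain-list, sequential sketch of the same odd/even scan recursion as the question's scanP, purely to illustrate the algorithm (my addition; this is not dph code and is not parallel):

scanL :: (a -> a -> a) -> [a] -> [a]
scanL f xs
  | length xs <= 1 = xs
  | otherwise      = head evens : rest
  where
    (evens, odds) = deinterleave xs
    psums = scanL f (zipWith f evens odds)  -- prefix sums of the pair sums
    body  = concat (zipWith (\p e -> [p, f p e]) psums (drop 1 evens))
    rest
      | length evens == length odds = body ++ [last psums]
      | otherwise                   = body
    deinterleave (a:b:t) = let (as, bs) = deinterleave t in (a:as, b:bs)
    deinterleave [a]     = ([a], [])
    deinterleave []      = ([], [])

-- e.g. scanL (+) [1..6] == [1,3,6,10,15,21]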
Recursively sort non-contiguous list to list of contiguous lists
I've been trying to learn a bit of functional programming (with Haskell & Erlang) lately and I'm always amazed at the succinct solutions people can come up with when they can think recursively and know the tools.

I want a function to convert a list of sorted, unique, non-contiguous integers into a list of contiguous lists, i.e.:

[1,2,3,6,7,8,10,11]

to:

[[1,2,3], [6,7,8], [10,11]]

This was the best I could come up with in Haskell (two functions):

make_ranges :: [[Int]] -> [Int] -> [[Int]]
make_ranges ranges [] = ranges
make_ranges [] (x:xs)
    | null xs = [[x]]
    | otherwise = make_ranges [[x]] xs
make_ranges ranges (x:xs)
    | (last (last ranges)) + 1 == x = make_ranges ((init ranges) ++ [(last ranges ++ [x])]) xs
    | otherwise = make_ranges (ranges ++ [[x]]) xs

rangify :: [Int] -> [[Int]]
rangify lst = make_ranges [] lst

It might be a bit subjective, but I'd be interested to see a better, more elegant, solution to this in either Erlang or Haskell (other functional languages too, but I might not understand them). Otherwise, points for just fixing my crappy beginner's Haskell style!
Most straightforward way in my mind is a foldr:

ranges = foldr step []
  where
    step x [] = [[x]]
    step x acc@((y:ys):zs)
      | y == x + 1 = (x:y:ys):zs
      | otherwise  = [x]:acc

Or, more concisely:

ranges = foldr step []
  where
    step x ((y:ys):zs) | y == x + 1 = (x:y:ys):zs
    step x acc = [x]:acc

But wait, there's more!

abstractRanges f = foldr step []
  where
    step x ((y:ys):zs) | f x y = (x:y:ys):zs
    step x acc = [x]:acc

ranges      = abstractRanges (\x y -> y == x + 1)
powerRanges = abstractRanges (\x y -> y == x*x) -- mighty morphin

By turning the guard function into a parameter, you can group more interesting things than just +1 sequences.

*Main> powerRanges [1,1,1,2,4,16,3,9,81,5,25]
[[1,1,1],[2,4,16],[3,9,81],[5,25]]

The utility of this particular function is questionable...but fun!
I can't believe I got the shortest solution. I know this is no code golf, but I think it is still quite readable:

import GHC.Exts

range xs = map (map fst) $ groupWith snd $ zipWith (\a b -> (a, a-b)) xs [0..]

or pointfree

range = map (map snd) . groupWith fst . zipWith (\a b -> (b-a, b)) [0..]

BTW, groupWith snd can be replaced with groupBy (\a b -> snd a == snd b) if you prefer Data.List over GHC.Exts

[Edit] BTW: Is there a nicer way to get rid of the lambda (\a b -> (b-a, b)) than (curry $ (,) <$> ((-) <$> snd <*> fst) <*> snd)?

[Edit 2] Yeah, I forgot (,) is a functor. So here is the obfuscated version:

range = map (map fst) . groupWith snd . (flip $ zipWith $ curry $ fmap <$> (-).fst <*> id) [0..]

Suggestions are welcome...
import Data.List (groupBy)

ranges xs = (map.map) snd
          . groupBy (const fst)
          . zip (True : zipWith ((==) . succ) xs (tail xs))
          $ xs

As to how to come up with such a thing: I started with the zipWith f xs (tail xs), which is a common idiom when you want to do something on consecutive elements of a list. Likewise is zipping up a list with information about the list, and then acting (groupBy) upon it. The rest is plumbing.

Then, of course, you can feed it through #pl and get:

import Data.List (groupBy)
import Control.Monad (ap)
import Control.Monad.Instances()

ranges = ((((map.map) snd) . groupBy (const fst)) .) =<< zip . (True:) . ((zipWith ((==) . succ)) `ap` tail)

which, by my authoritative definition, is evil due to Monad ((->) a). Twice, even. The data flow is meandering too much to lay it out in any sensible way. zipaptail is an Aztec god, and Aztec gods aren't to be messed with.
Another version in Erlang:

part(List) -> part(List,[]).

part([H1,H2|T],Acc) when H1 =:= H2 - 1 ->
    part([H2|T],[H1|Acc]);
part([H1|T],Acc) ->
    [lists:reverse([H1|Acc]) | part(T,[])];
part([],Acc) ->
    Acc.
k z = map (fst <$>) . groupBy (const snd) . zip z . (False:) . (zipWith ((==) . succ) <*> tail) $ z
Try reusing standard functions.

import Data.List (groupBy)

rangeify :: (Num a) => [a] -> [[a]]
rangeify l = map (map fst) $ groupBy (const snd) $ zip l contigPoints
  where contigPoints = False : zipWith (==) (map (+1) l) (drop 1 l)

Or, following (mixed) advice to use unfoldr, stop abusing groupBy, and be happy using partial functions when it doesn't matter:

import Control.Arrow ((***))
import Data.List (unfoldr)

spanContig :: (Num a) => [a] -> [[a]]
spanContig l =
    map fst *** map fst $ span (\(a, b) -> a == b + 1) $ zip l (head l - 1 : l)

rangeify :: (Num a) => [a] -> [[a]]
rangeify = unfoldr $ \l -> if null l then Nothing else Just $ spanContig l
Erlang using foldr:

ranges(List) ->
    lists:foldr(fun (X, [[Y | Ys] | Acc]) when Y == X + 1 ->
                        [[X, Y | Ys] | Acc];
                    (X, Acc) ->
                        [[X] | Acc]
                end, [], List).
This is my v0.1 and I can probably make it better:

makeCont :: [Int] -> [[Int]]
makeCont [] = []
makeCont [a] = [[a]]
makeCont (a:b:xs) = if b - a == 1
                      then (a : head next) : tail next
                      else [a] : next
  where
    next :: [[Int]]
    next = makeCont (b:xs)

And I will try and make it better. Edits coming I think.
As a comparison, here's an implementation in Erlang:

partition(L) -> [lists:reverse(T) || T <- lists:reverse(partition(L, {[], []}))].

partition([E|L], {R, [EL|_] = T}) when E == EL + 1 ->
    partition(L, {R, [E|T]});
partition([E|L], {R, []}) ->
    partition(L, {R, [E]});
partition([E|L], {R, T}) ->
    partition(L, {[T|R], [E]});
partition([], {R, []}) ->
    R;
partition([], {R, T}) ->
    [T|R].
The standard paramorphism recursion scheme isn't in Haskell's Data.List module, though I think it should be. Here's a solution using a paramorphism; because you are building a list-of-lists from a list, the cons-ing is a little tricksy:

contig :: (Eq a, Num a) => [a] -> [[a]]
contig = para phi []
  where
    phi x ((y:_), (a:acc)) | x + 1 == y = (x:a):acc
    phi x (_, acc)                      = [x]:acc

A paramorphism is general recursion, or a fold with lookahead:

para :: (a -> ([a], b) -> b) -> b -> [a] -> b
para phi b []     = b
para phi b (x:xs) = phi x (xs, para phi b xs)
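A quick check against the question's example data (my addition, not from the original answer):

ghci> contig [1,2,3,6,7,8,10,11]
[[1,2,3],[6,7,8],[10,11]]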
It can be pretty clear and simple in Erlang:

partition([]) -> [];
partition([A|T]) -> partition(T, [A]).

partition([A|T], [B|_]=R) when A =:= B+1 -> partition(T, [A|R]);
partition(L, P) -> [lists:reverse(P)|partition(L)].

Edit: Just out of curiosity I have compared mine and Lukas's version, and mine seems about 10% faster, in both the native and the bytecode versions, on a test set I generated with

lists:usort([random:uniform(1000000)||_<-lists:seq(1,1000000)])

on R14B01, 64-bit, on my notebook. (The test set is 669462 elements long and was partitioned into 232451 sublists.)

Edit2: Another test set, lists:usort([random:uniform(1000000)||_<-lists:seq(1,10000000)]), with length 999963 and 38 partitions, makes a bigger difference in native code: my version finishes in less than half the time. The bytecode version is only about 20% faster.

Edit3: Some micro-optimizations which provide additional performance but lead to uglier and less maintainable code:

part4([]) -> [];
part4([A|T]) -> part4(T, A, []).

part4([A|T], B, R) when A =:= B+1 -> part4(T, A, [B|R]);
part4([A|T], B, []) -> [[B]|part4(T, A, [])];
part4([A|T], B, R) -> [lists:reverse(R, [B])|part4(T, A, [])];
part4([], B, R) -> [lists:reverse(R,[B])].
Here's an attempt from a Haskell noob:

ranges ls = let (a, r) = foldl (\(r, a@(h:t)) e ->
                                  if h + 1 == e
                                    then (r, e:a)
                                    else (a:r, [e]))
                               ([], [head ls])
                               (tail ls)
            in reverse . map reverse $ r : a