Nondeterminism for infinite inputs - Haskell

Using lists to model nondeterminism is problematic if the inputs can take infinitely many values. For example
pairs = [ (a,b) | a <- [0..], b <- [0..] ]
This will return [(0,0),(0,1),(0,2),...] and never get around to showing you any pair whose first element is not 0.
Using the Cantor pairing function to collapse a list of lists into a single list can get around this problem. For example, we can define a bind-like operator that orders its outputs more intelligently by
(>>>=) :: [a] -> (a -> [b]) -> [b]
as >>>= f = cantor (map f as)
cantor :: [[a]] -> [a]
cantor xs = go 1 xs
  where
    go _ [] = []
    go n xs = hs ++ go (n+1) ts
      where
        ys = filter (not . null) xs
        hs = take n $ map head ys
        ts = mapN n tail ys
mapN :: Int -> (a -> a) -> [a] -> [a]
mapN _ _ [] = []
mapN n f xs@(h:t)
  | n <= 0    = xs
  | otherwise = f h : mapN (n-1) f t
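As a sanity check, cantor can also be applied directly to an infinite list of infinite rows; a hypothetical ghci session:
ghci> take 10 $ cantor [ [ (i,j) | j <- [0..] ] | i <- [0..] ]
[(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0)]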
If we now wrap this up as a monad, we can enumerate all possible pairs
newtype Select a = Select { runSelect :: [a] }
-- Functor and Applicative instances are required for Monad on GHC >= 7.10:
instance Functor Select where
  fmap f (Select as) = Select (map f as)
instance Applicative Select where
  pure a = Select [a]
  Select fs <*> Select as = Select $ fs >>>= \f -> map f as
instance Monad Select where
  return = pure
  Select as >>= f = Select $ as >>>= (runSelect . f)
pairs = runSelect $ do
  a <- Select [0..]
  b <- Select [0..]
  return (a,b)
This results in
>> take 15 pairs
[(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0),(0,4),(1,3),(2,2),(3,1),(4,0)]
which is a much more desirable result. However, if we were to ask for triples instead, the ordering on the outputs isn't as "nice" and it's not even clear to me that all outputs are eventually included --
>> take 15 triples
[(0,0,0),(0,0,1),(1,0,0),(0,1,0),(1,0,1),(2,0,0),(0,0,2),(1,1,0),(2,0,1),(3,0,0),(0,1,1),(1,0,2),(2,1,0),(3,0,1),(4,0,0)]
Note that (2,0,1) appears before (0,1,1) in the ordering. My intuition says that a good solution to this problem will order the outputs according to some notion of "size", which could be an explicit input to the algorithm or could be given implicitly (as in this example, where the "size" of an input is its position in the input lists). When combining inputs, the "size" of a combination should be some function (probably the sum) of the sizes of the inputs.
Is there an elegant solution to this problem that I am missing?

TL;DR: It flattens two dimensions at a time, rather than flattening three at once. You can't tidy this up in the monad because >>= is binary, not ternary etc.
I'll assume you defined
(>>>=) :: [a] -> (a -> [b]) -> [b]
as >>>= f = cantor $ map f as
to interleave the list of lists.
You like that because it goes diagonally:
sums = runSelect $ do
  a <- Select [0..]
  b <- Select [0..]
  return (a+b)
gives
ghci> take 36 sums
[0,1,1,2,2,2,3,3,3,3,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7,7]
so it's pleasingly keeping the "sizes" in order, but the pattern appears to be broken for triples, and you doubt completeness, but you needn't. It's doing the same trick, but twice, rather than for all three at once:
triplePairs = runSelect $ do
  a <- Select [0..]
  b <- Select [0..]
  c <- Select [0..]
  return (a, (b, c))
The second pair is treated as a single source of data, so notice that:
ghci> map fst $ take 36 pairs
[0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7]
ghci> map fst $ take 36 triplePairs
[0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7]
and (adding some spaces/newlines for clarity of pattern):
ghci> map snd $ take 36 pairs
[0, 1,0, 2,1,0, 3,2,1,0, 4,3,2,1,0, 5,4,3,2,1,0, 6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0]
ghci> map snd $ take 36 triplePairs
[(0,0), (0,1),(0,0), (1,0),(0,1),(0,0), (0,2),(1,0),(0,1),(0,0),
(1,1),(0,2),(1,0),(0,1),(0,0),
(2,0),(1,1),(0,2),(1,0),(0,1),(0,0),
(0,3),(2,0),(1,1),(0,2),(1,0),(0,1),(0,0),
(1,2),(0,3),(2,0),(1,1),(0,2),(1,0),(0,1),(0,0)]
so you can see it's using exactly the same pattern. This doesn't preserve total sums and it oughtn't because we're getting to three dimensions by flattening two dimensions first before flattening the third in. The pattern is obscured, but it's just as guaranteed to make it to the end of the list.
Sadly if you want to do three dimensions in a sum-preserving way, you'll have to write cantor2, cantor3 and cantor4 functions, possibly a cantorN function, but you'll have to ditch the monadic interface, which is inherently based on the bracketing of >>=, hence two-at-a-time flattening of dimensions.
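For illustration, a minimal sketch of what such a sum-preserving three-way enumeration could look like (cantor3 is a hypothetical name, and the indexing with !! assumes all three input lists are infinite):
cantor3 :: [a] -> [b] -> [c] -> [(a, b, c)]
cantor3 as bs cs =
  [ (as !! i, bs !! j, cs !! k)
  | n <- [0 ..]          -- enumerate triples by total "size" n = i + j + k
  , i <- [0 .. n]
  , j <- [0 .. n - i]
  , let k = n - i - j
  ]
With this, take 10 (cantor3 [0..] [0..] [0..]) should start [(0,0,0),(0,0,1),(0,1,0),(1,0,0),(0,0,2),...], with the component sums nondecreasing.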

import Control.Applicative
import Control.Arrow
data Select a = Select [a]
              | Selects [Select a]
instance Functor Select where
  fmap f (Select x)    = Select $ map f x
  fmap f (Selects xss) = Selects $ map (fmap f) xss
instance Applicative Select where
  pure = Select . (:[])
  Select fs  <*> xs = Selects $ map (`fmap` xs) fs
  Selects fs <*> xs = Selects $ map (<*> xs) fs
instance Monad Select where
  return = pure
  Select xs  >>= f = Selects $ map f xs
  Selects xs >>= f = Selects $ map (>>= f) xs
runSelect :: Select a -> [a]
runSelect = go 1
  where
    go n xs = uncurry (++) . second (go $ n+1) $ splitOff n xs
    splitOff n (Select xs)   = second Select $ splitAt n xs
    splitOff n (Selects sls) = (concat hs, Selects $ tsl ++ rl)
      where ((hs, tsl), rl) = first (unzip . map (splitOff n)) $ splitAt n sls
*Select> take 15 . runSelect $ do { a <- Select [0..]; b <- Select [0..]; return (a,b) }
[(0,0),(0,1),(1,0),(1,1),(0,2),(1,2),(2,0),(2,1),(2,2),(0,3),(1,3),(2,3),(3,0),(3,1),(3,2)]
*Select> take 15 . runSelect $ do { a <- Select [0..]; b <- Select [0..]; c <- Select [0..]; return (a,b,c) }
[(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1),(0,0,2),(0,1,2),(0,2,0),(0,2,1),(0,2,2),(1,0,2),(1,1,2)]
Note that this is still not quite Cantor-tuples ((0,1,1) shouldn't come before (1,0,0)), but getting it correct would be possible as well in a similar manner.

A correct multidimensional enumerator could be represented with a temporary state object:
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverlappingInstances #-}
class Space a b where
  slice :: a -> ([b], a)
instance Space [a] a where
  slice (l:ls) = ([l], ls)
  slice []     = ([], [])
instance (Space sp x) => Space ([sp], [sp]) x where
  slice (fs, b:bs) = let
      ss = map slice (b : fs)
      yield = concat $ map fst ss
    in (yield, (map snd ss, bs))
Here an N dimensional space is represented by a tuple of lists of N-1 dimensional subspaces that have and haven't been touched by the enumeration.
You can then use the following to produce a well ordered list
enumerate :: (Space sp x) => sp -> [x]
enumerate sp = let (sl, sp') = slice sp
               in sl ++ enumerate sp'
Example in Ideone.
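To make this concrete, a hypothetical two-dimensional space of pairs could be built and enumerated like this (my own sketch; it assumes infinitely many rows, since slice above has no case for an exhausted list of untouched subspaces):
pairSpace :: ([[(Int, Int)]], [[(Int, Int)]])
pairSpace = ([], [ [ (i, j) | j <- [0..] ] | i <- [0..] ])
firstPairs :: [(Int, Int)]
firstPairs = take 10 (enumerate pairSpace)
-- should be [(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(3,0),(2,1),(1,2),(0,3)]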

The omega package does exactly what you want and guarantees that every element will be eventually visited:
import Control.Applicative
import Control.Monad.Omega
main = print . take 200 . runOmega $
  (,,) <$> each [0..] <*> each [0..] <*> each [0..]
Another option would be to use LogicT. It gives more flexibility (if you need it) and has operations such as (>>-) that ensure that every combination is eventually encountered.
import Control.Applicative
import Control.Monad
import Control.Monad.Logic
-- | Convert a list into any MonadPlus.
each :: (MonadPlus m) => [a] -> m a
each = msum . map return
-- | A fair variant of '(<*>)' that ensures that both branches are explored.
(<#>) :: (MonadLogic m) => m (a -> b) -> m a -> m b
(<#>) f k = f >>- (\f' -> k >>- (\k' -> return $ f' k'))
infixl 4 <#>
main = print . observeMany 200 $
  (,,) <$> each [0..] <#> each [0..] <#> each [0..]

Related

Given a list, how can I perform some transformation only on sub-lists whose each two elements satisfy a binary predicate?

(In my actual use case I have a list of type [SomeType], SomeType having a finite number of constructors, all nullary; in the following I'll use String instead of [SomeType] and use only 4 Chars, to simplify a bit.)
I have a list like this "aaassddddfaaaffddsssadddssdffsdf" where each element can be one of 'a', 's', 'd', 'f', and I want to do some further processing on each contiguous sequence of non-as, let's say turning them upper case and reversing the sequence, thus obtaining "aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD". (I've added the reversing requirement to make it clear that the processing involves all the contiguous non 'a'-s at the same time.)
To turn each sub-String upper case, I can use this:
func :: String -> String
func = reverse . map Data.Char.toUpper
But how do I run that func only on the sub-Strings of non-'a's?
My first thought is that Data.List.groupBy can be useful, and the overall solution could be:
concat $ map (\x -> if head x == 'a' then x else func x)
       $ Data.List.groupBy ((==) `on` (== 'a')) "aaassddddfaaaffddsssadddssdffsdf"
This solution, however, does not convince me, as I'm using == 'a' both when grouping (which to me seems good and unavoidable) and when deciding whether I should turn a group upper case.
I'm looking for advices on how I can accomplish this small task in the best way.
You could classify the list elements by the predicate before grouping. Note that I’ve reversed the sense of the predicate to indicate which elements are subject to the transformation, rather than which elements are preserved.
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Arrow ((&&&))
import Data.Function (on)
import Data.List (groupBy)
import Data.Monoid (First(..))
mapSegmentsWhere
  :: forall a. (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
mapSegmentsWhere p f
  = concatMap (applyMatching . sequenceA)  -- [a]
  . groupBy ((==) `on` fst)                -- [[(First Bool, a)]]
  . map (First . Just . p &&& id)          -- [(First Bool, a)]
  where
    applyMatching :: (First Bool, [a]) -> [a]
    applyMatching (First (Just matching), xs)
      = applyIf matching f xs
    applyIf :: forall a. Bool -> (a -> a) -> a -> a
    applyIf condition f
      | condition = f
      | otherwise = id
Example use:
> mapSegmentsWhere (/= 'a') (reverse . map toUpper) "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Here I use the First monoid with sequenceA to merge the lists of adjacent matching elements from [(Bool, a)] to (Bool, [a]), but you could just as well use something like map (fst . head &&& map snd). You can also skip the ScopedTypeVariables if you don’t want to write the type signatures; I just included them for clarity.
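For comparison, a head-based variant along those lines might look like this (a sketch; mapSegmentsWhere' is a hypothetical name, and groupBy and on are assumed to be in scope from Data.List and Data.Function):
mapSegmentsWhere' :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
mapSegmentsWhere' p f
  = concatMap apply
  . groupBy ((==) `on` fst)
  . map (\x -> (p x, x))
  where
    -- head is safe here: groupBy never produces empty groups
    apply g = (if fst (head g) then f else id) (map snd g)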
If we need to remember the difference between the 'a's and the rest, let's put them in different branches of an Either. In fact, let's define a newtype while we're at it:
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE ViewPatterns #-}
import Data.Bifoldable
import Data.Bifunctor
import Data.Char
import Data.List
newtype Bunched a b = Bunched [Either a b] deriving (Functor, Foldable)
instance Bifunctor Bunched where
  bimap f g (Bunched b) = Bunched (fmap (bimap f g) b)
instance Bifoldable Bunched where
  bifoldMap f g (Bunched b) = mconcat (fmap (bifoldMap f g) b)
fmap will let us work over the non-separators. fold will return the concatenation of the non-separators, bifold will return the concatenation of everything. Of course, we could have defined separate functions unrelated to Foldable and Bifoldable, but why avoid already existing abstractions?
To split the list, we can use an unfoldr that alternately searches for as and non-as with the span function:
splitty :: Char -> String -> Bunched String String
splitty c str = Bunched $ unfoldr step (True, str)
  where
    step (_, [])                          = Nothing
    step (True, span (== c) -> (as, ys))  = Just (Left as, (False, ys))
    step (False, span (/= c) -> (xs, ys)) = Just (Right xs, (True, ys))
Putting it to work:
ghci> bifold . fmap func . splitty 'a' $ "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Note: Bunched is actually the same as Tannen [] Either from the bifunctors package, if you don't mind the extra dependency.
There are other answers here, but I think they get too excited about iteration abstractions. A manual recursion, alternately taking things that match the predicate and things that don't, makes this problem exquisitely simple:
onRuns :: Monoid m => (a -> Bool) -> ([a] -> m) -> ([a] -> m) -> [a] -> m
onRuns p = go p (not . p) where
  go _ _ _ _ [] = mempty
  go p p' f f' xs = case span p xs of
    (ts, rest) -> f ts `mappend` go p' p f' f rest
Try it out in ghci:
Data.Char> onRuns ('a'==) id (reverse . map toUpper) "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Here is a simple solution - function process below - that only requires that you define two functions isSpecial and func. Given a constructor from your type SomeType, isSpecial determines whether it is one of those constructors that form a special sublist or not. The function func is the one you included in your question; it defines what should happen with the special sublists.
The code below is for character lists. Just change isSpecial and func to make it work for your lists of constructors.
import Data.Char (toUpper)
isSpecial c = c /= 'a'
func = reverse . map toUpper
turn = map (\x -> ([x], isSpecial x))
amalgamate [] = []
amalgamate [x] = [x]
amalgamate ((xs, xflag) : (ys, yflag) : rest)
  | xflag /= yflag = (xs, xflag) : amalgamate ((ys, yflag) : rest)
  | otherwise      = amalgamate ((xs ++ ys, xflag) : rest)
work = map (\(xs, flag) -> if flag then func xs else xs)
process = concat . work . amalgamate . turn
Let's try it on your example:
*Main> process "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
*Main>
Applying one function at a time, shows the intermediate steps taken:
*Main> turn "aaassddddfaaaffddsssadddssdffsdf"
[("a",False),("a",False),("a",False),("s",True),("s",True),("d",True),
("d",True),("d",True),("d",True),("f",True),("a",False),("a",False),
("a",False),("f",True),("f",True),("d",True),("d",True),("s",True),
("s",True),("s",True),("a",False),("d",True),("d",True),("d",True),
("s",True),("s",True),("d",True),("f",True),("f",True),("s",True),
("d",True),("f",True)]
*Main> amalgamate it
[("aaa",False),("ssddddf",True),("aaa",False),("ffddsss",True),
("a",False),("dddssdffsdf",True)]
*Main> work it
["aaa","FDDDDSS","aaa","SSSDDFF","a","FDSFFDSSDDD"]
*Main> concat it
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
*Main>
We can just do what you describe, step by step, getting a clear simple minimal code which we can easily read and understand later on:
foo :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
foo p f xs = [ a
             | g <- groupBy ((==) `on` fst)
                      [(p x, x) | x <- xs]       -- [ (True, 'a'), ... ]
             , let (t:_, as) = unzip g           -- ( [True, ...], "aaa" )
             , a <- if t then as else f as ]     -- final concat
-- unzip :: [(b, a)] -> ([b], [a])
We break the list into same-p spans and unpack each group with the help of unzip. Trying it out:
> foo (=='a') reverse "aaabcdeaa"
"aaaedcbaa"
So no, using == 'a' in the grouping is avoidable and hence not especially good: it introduces an unnecessary Eq constraint on your data type, when all we need is equality on Booleans.

Removing duplicate elements in a Seq

I'm wondering how to implement nub over a Seq a.
I get that one could do:
nubSeq :: Eq a => Seq a -> Seq a
nubSeq = fromList . nub . toList
I'm just wondering whether there is something standard that does not convert to lists in order to call nub :: [a] -> [a]?
An implementation that occurred to me, based obviously on nub, is:
nubSeq :: (Eq a) => Seq a -> Seq a
nubSeq = Data.Sequence.foldrWithIndex
  (\_ x a -> case x `Data.Sequence.elemIndexR` a of
      Just _  -> a
      Nothing -> a |> x)
  Data.Sequence.empty
But there must be something more elegant?
thanks.
Not sure whether this qualifies as more elegant, but it splits the concerns into independent functions (caveat: you need an Ord constraint on a):
seqToNubMap takes a Seq and outputs a Map associating to each a the smallest index at which it appeared in the sequence
mapToList takes a Map of values and positions and produces a list of values in increasing order according to the specified positions
nubSeq combines these to generate a sequence without duplicates
The whole thing should be O(n*log(n)), I believe:
{-# LANGUAGE ScopedTypeVariables #-}
module NubSeq where
import Data.Map as Map
import Data.List as List
import Data.Sequence as Seq
import Data.Set as Set
import Data.Function
import Data.Foldable as Fold
seqToNubMap :: Ord a => Seq a -> Map a Int
seqToNubMap = foldlWithIndex (\ m k v -> insertWith min v k m) Map.empty
mapToList :: Ord a => Map a Int -> [a]
mapToList = fmap fst . List.sortBy (compare `on` snd) . Map.toList
nubSeq :: Ord a => Seq a -> Seq a
nubSeq = Seq.fromList . mapToList . seqToNubMap
Or a simpler alternative following @DavidFletcher's comment:
nubSeq' :: forall a. Ord a => Seq a -> Seq a
nubSeq' xs = Fold.foldr cons nil xs Set.empty where
  cons :: a -> (Set a -> Seq a) -> (Set a -> Seq a)
  cons x xs seen
    | x `elem` seen = xs seen
    | otherwise     = x <| xs (Set.insert x seen)
  nil :: Set a -> Seq a
  nil _ = Seq.empty
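A hypothetical ghci check, assuming the NubSeq module above is loaded:
ghci> nubSeq (Seq.fromList "abcabd")
fromList "abcd"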
Another way with an Ord constraint - use a scan to make the sets of
elements that appear in each prefix of the list. Then we can filter out
any element that's already been seen.
import Data.Sequence as Seq
import Data.Set as Set
import Data.Traversable (mapAccumL)
nubSeq :: Ord a => Seq a -> Seq a
nubSeq xs = (fmap fst . Seq.filter (uncurry notElem)) (Seq.zip xs seens)
  where
    seens = Seq.scanl (flip Set.insert) Set.empty xs
Or roughly the same thing as a mapAccumL:
nubSeq' :: Ord a => Seq a -> Seq a
nubSeq' = fmap fst . Seq.filter snd . snd . mapAccumL f Set.empty
  where
    f s x = (Set.insert x s, (x, x `notElem` s))
(If I was using lists I would use Maybes instead of the pairs with
Bool, then use catMaybes instead of filtering. There doesn't seem to be catMaybes
for Sequence though.)
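For what it's worth, such a helper is easy to write by hand; a minimal sketch (catMaybesSeq is a made-up name, not something Data.Sequence exports):
catMaybesSeq :: Seq (Maybe a) -> Seq a
catMaybesSeq = foldr step Seq.empty
  where
    -- keep the Just payloads, drop the Nothings
    step (Just x) acc = x <| acc
    step Nothing  acc = acc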
I think your code should be pretty efficient. Since a Seq is already a tree data structure, using another tree-based structure like Map or HashMap to store and look up the previous items doesn't make too much sense to me.
Instead, I take the first item and check for its existence in the rest. If it exists, I drop that item and proceed the same way with the rest, recursively. If not, I construct a new sequence whose first element is the unique element and whose rest is the result of nubSeq applied to the rest. Should be typical. I use ViewPatterns.
{-# LANGUAGE ViewPatterns #-}
import Data.Sequence as Seq
nubSeq :: Eq a => Seq a -> Seq a
nubSeq (viewl -> EmptyL) = empty
nubSeq (viewl -> (x :< xs))
  | elemIndexL x xs == Nothing = x <| nubSeq xs
  | otherwise                  = nubSeq xs
*Main> nubSeq . fromList $ [1,2,3,4,4,2,3,6,7,1,2,3,4]
fromList [6,7,1,2,3,4]

Haskell generate random numbers with a property without explicit recursion

Here I have a function to generate a stream of random numbers between 0 and 999.
import System.Random
randomHelp :: RandomGen g => g -> [Int]
randomHelp g = zipWith (mod) (map fst $ iterate (next . snd) $ next $ snd $ split g) $ repeat 1000
I would like to select numbers from the stream defined above such that each elem(i) and elem(i + 1) respect a property; for example, their gcd has to be one. All I can think of is a fold, because I can start with an accumulator containing the number 1 (let's assume 1 will be the first element I want to show), then check the property in the fold's function and, if it is respected, add the element to the accumulator. But the problem is that the program blocks, I think because of a stack overflow.
Here is the function:
randomFunc :: RandomGen g => g -> [Int]
randomFunc g = foldl (\acc x -> if (gcd x (last acc) == 1) then acc ++ [x] else acc) [1] (randomHelp g)
Note: I don't want to use explicit recursion.
A right fold would probably fit better, something like:
import System.Random (RandomGen, randomRs, mkStdGen)
randomFunc :: RandomGen g => g -> [Int]
randomFunc g = foldr go (const []) (randomRs (1, 20) g) 1
  where go x f lst = if gcd x lst == 1 then x : f x else f lst
then
\> take 20 . randomFunc $ mkStdGen 1
[16,7,6,19,8,15,16,1,9,2,15,17,14,3,11,17,15,8,1,5]
Doing so, you build the list using : instead of ++ (which can incur quadratic cost), and you avoid the calls to last.
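The same shape also works with the 0..999 stream from the question; a minimal sketch, assuming the question's randomHelp is in scope (randomFunc' is a hypothetical name):
randomFunc' :: RandomGen g => g -> [Int]
randomFunc' g = foldr go (const []) (randomHelp g) 1
  where
    -- prev is the previously emitted element; keep x only if gcd x prev == 1
    go x f prev = if gcd x prev == 1 then x : f x else f prev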

Elegant implementation of n-dimensional matrix multiplication using lists?

List functions allow us to implement arbitrarily-dimensional vector math quite elegantly. For example:
import Control.Monad (join)
on = (.) . (.)
add = zipWith (+)
sub = zipWith (-)
mul = zipWith (*)
dist = len `on` sub
dot = sum `on` mul
len = sqrt . join dot
And so on.
main = print $ add [1,2,3] [1,1,1] -- [2,3,4]
main = print $ len [1,1,1] -- 1.7320508075688772
main = print $ dot [2,0,0] [2,0,0] -- 4
Of course, this is not the most efficient solution, but it is insightful to look at, as one can say that map, zipWith and the like generalize those vector operations. There is one function I couldn't implement elegantly, though: the cross product. Since a possible n-dimensional generalization of the cross product is the n-dimensional matrix determinant, how can I implement matrix multiplication elegantly?
Edit: yes, I asked a completely unrelated question to the problem I set up. Fml.
It just so happens I have some code lying around for doing n-dimensional matrix operations which I thought was quite cute when I wrote it at least:
{-# LANGUAGE NoMonomorphismRestriction #-}
module MultiArray where
import Control.Arrow
import Control.Monad
import Data.Ix
import Data.Maybe
import Data.Array (Array)
import qualified Data.Array as A
-- {{{ from Dmwit.hs
deleteAt n xs = take n xs ++ drop (n + 1) xs
insertAt n x xs = take n xs ++ x : drop n xs
doublify f g xs ys = f (uncurry g) (zip xs ys)
any2 = doublify any
all2 = doublify all
-- }}}
-- makes the most sense when ls and hs have the same length
instance Ix a => Ix [a] where
  range = sequence . map range . uncurry zip
  inRange = all2 inRange . uncurry zip
  rangeSize = product . uncurry (zipWith (curry rangeSize))
  index (ls, hs) xs = fst . foldr step (0, 1) $ zip indices sizes where
    indices = zipWith index (zip ls hs) xs
    sizes = map rangeSize $ zip ls hs
    step (i, b) (s, p) = (s + p * i, p * b)
fold :: (Enum i, Ix i) => ([a] -> b) -> Int -> Array [i] a -> Array [i] b
fold f n a = A.array newBound assocs where
  (oldLowBound, oldHighBound) = A.bounds a
  (newLowBoundBeg , dimLow : newLowBoundEnd ) = splitAt n oldLowBound
  (newHighBoundBeg, dimHigh: newHighBoundEnd) = splitAt n oldHighBound
  assocs = [ (beg ++ end, f [a A.! (beg ++ i : end) | i <- [dimLow..dimHigh]])
           | beg <- range (newLowBoundBeg, newHighBoundBeg)
           , end <- range (newLowBoundEnd, newHighBoundEnd)
           ]
  newBound = (newLowBoundBeg ++ newLowBoundEnd, newHighBoundBeg ++ newHighBoundEnd)
flatten a = check a >> return value where
  check = guard . (1==) . length . fst . A.bounds
  value = A.ixmap ((head *** head) . A.bounds $ a) return a
elementWise :: (MonadPlus m, Ix i) => (a -> b -> c) -> Array i a -> Array i b -> m (Array i c)
elementWise f a b = check >> return value where
  check = guard $ A.bounds a == A.bounds b
  value = A.listArray (A.bounds a) (zipWith f (A.elems a) (A.elems b))
unsafeFlatten a = fromJust $ flatten a
unsafeElementWise f a b = fromJust $ elementWise f a b
matrixMult a b = fold sum 1 $ unsafeElementWise (*) a' b' where
  aBounds = (join (***) (!!0)) $ A.bounds a
  bBounds = (join (***) (!!1)) $ A.bounds b
  a' = copy 2 bBounds a
  b' = copy 0 aBounds b
bijection f g a = A.ixmap ((f *** f) . A.bounds $ a) g a
unFlatten = bijection return head
matrixTranspose = bijection reverse reverse
copy n (low, high) a = A.ixmap (newBounds a) (deleteAt n) a where
  newBounds = (insertAt n low *** insertAt n high) . A.bounds
The cute bit here is matrixMult, which is one of the only operations that is specialized to two-dimensional arrays. It expands its first argument along one dimension (by putting a copy of the two-dimensional object into each slice of the three-dimensional object); expands its second along another; does pointwise multiplication (now in a three-dimensional array); then collapses the fabricated third dimension by summing. Quite nice.
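As a quick illustration (a hypothetical example, assuming the MultiArray module above is loaded), two 2x2 matrices indexed by [Int] can be multiplied like this:
m1, m2 :: Array [Int] Double
m1 = A.listArray ([0,0], [1,1]) [1, 2, 3, 4]   -- [[1,2],[3,4]], row-major
m2 = A.listArray ([0,0], [1,1]) [5, 6, 7, 8]   -- [[5,6],[7,8]]
-- A.elems (matrixMult m1 m2) should give [19,22,43,50], i.e. [[19,22],[43,50]]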

Is there a non-recursive way to write this "paginate" function?

The following function is not in the standard Haskell libraries, so I wrote it:
paginate _ [] = []
paginate n xs = let (h, t) = splitAt n xs in h : paginate n t
Then I wanted to rewrite it without the explicit recursion while keeping it easily understandable.
I've tried the fix version, which does not yield very satisfactory results:
paginate = fix go
  where
    go _ _ [] = []
    go v n xs = let (h, t) = splitAt n xs in h : v n t
Are there simpler solutions which use the Prelude functions?
When I see this sort of problem, I like to think about the "shape" of the function you want. In this case, you're starting with a list and producing a list of lists. This is a special case of starting with a single value and producing a list, which corresponds to an unfold. So, if you don't mind importing Data.List, you can neatly write your function using unfoldr.
Let's take a look at how to derive this step by step. First, as above, you just have to know about unfoldr and see that it can apply. This is where a bit of experience (like reading answers like this one :)) comes in.
Next, we take a look at the type of unfoldr:
Prelude Data.List> :t unfoldr
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
The idea is that we start with a single "kernel" value (b), and apply a step function (b -> Maybe (a, b)) repeatedly until we hit Nothing. Our starting value is the list we want to separate into chunks, so we don't need to do any more processing there. This means the last step we have is implementing the step function.
Since we're starting with a list of values, we can replace b with [n]; moreover, since we want to produce a list of lists at the very end, we can replace [a] with [[n]] and, therefore, a with [n]. This gives us the final type we have to implement for the step function:
[n] -> Maybe ([n], [n])
Hey, that looks very similar to the type of splitAt!
splitAt :: Int -> [a] -> ([a], [a])
In fact, we can just wrap the result of splitAt in Just and satisfy our types. But this leaves out a very important part: the final Nothing that tells unfoldr to stop! So our helper function needs an extra base case for [] to return the right result.
Putting it all together gives us the final function:
import Data.List (unfoldr)
paginate n = unfoldr go
  where go [] = Nothing -- base case
        go ls = Just (splitAt n ls)
I think the end result is pretty good, but I'm not sure it's any more readable than the recursive version.
If you don't mind enabling an extension (LambdaCase), you can rewrite this in a slightly neater way by avoiding naming go:
paginate n = unfoldr $ \case
  [] -> Nothing
  ls -> Just (splitAt n ls)
FYI, following from Tikhon Jelvis' answer, I eventually ended up with:
boolMaybe p f x = if p x then Just (f x) else Nothing
groupWith = unfoldr . boolMaybe (not . null)
paginate = groupWith . splitAt
which is a bit too obfuscated for me. Back to the recursive version.
Here is a slightly changed version of Tikhon Jelvis's code:
import Data.Maybe
import Data.List
import Control.Applicative
paginate n = unfoldr $ \xs -> listToMaybe xs *> Just (splitAt n xs)
Or in pointfree:
paginate n = unfoldr $ (*>) <$> listToMaybe <*> Just . splitAt n
there's always been this:
import Data.List
chunks n = takeWhile (not . null) . unfoldr (Just . splitAt n)
but the new champion, for me, is
import Data.Maybe -- additional imports
import Control.Applicative
chunks n = unfoldr $ (<$) . splitAt n <*> listToMaybe
-- \xs -> splitAt n xs <$ listToMaybe xs
(tweaking just a little the code from a comment by @user3237465).
The bits used here are:
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
(f <*> g) x = f x (g x) -- S combinator
x <$ c = fmap (const x) c -- "mapped into"
listToMaybe xs = head (map Just xs ++ [Nothing])
Another, similar way to write it is
import Control.Monad -- an additional import
chunks n = unfoldr $ listToMaybe . join ((<$) . splitAt n)
-- \xs -> listToMaybe (splitAt n xs <$ xs)
using the monadic join for functions,
join f x = f x x -- W combinator
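As a quick sanity check (a hypothetical ghci session, with any of the chunks definitions above in scope):
ghci> chunks 4 [1..10]
[[1,2,3,4],[5,6,7,8],[9,10]]
ghci> chunks 4 "paginations"
["pagi","nati","ons"]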

Resources