Haskell List of tuples to list?

Is it possible to convert a list of tuples [(Int,Int)] to a flat list in a generic way that works for any input size? I've seen in various questions that it's not possible generically.
Example:
type X = [(Int,Int)]
func :: X -> [Int]

Your question is not very clear about how the tuples should be converted into a list. I assume that you want to have them flattened - for instance, [(1,2),(3,4)] should become [1,2,3,4].
This translation is only possible if the two elements of your tuple have the same type. In that case you can do something like this:
tupleToList :: [(a,a)] -> [a]
tupleToList ((a,b):xs) = a : b : tupleToList xs
tupleToList _ = []
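For instance, a quick check in GHCi (illustrative):
> tupleToList [(1,2),(3,4)]
[1,2,3,4]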
In the general case, such a translation is impossible. One thing I could imagine to make the impossible possible is to use Either to wrap up the two different types:
tupleToList :: [(a,b)] -> [Either a b]
tupleToList ((a,b):xs) = Left a : Right b : tupleToList xs
tupleToList _ = []
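Again, a quick illustrative check:
> tupleToList [(1,"one"),(2,"two")]
[Left 1,Right "one",Left 2,Right "two"]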

The lens library handles this and similar cases consistently.
> import Control.Lens
> toListOf (traverse . both) [(1,2),(3,4)]
            ^          ^
            |          |> Traversal of the tuple (a, a)
            |> Traversal of a list [b]
[1,2,3,4]
To convert from a list of lists:
> toListOf (traverse . traverse) [[1,2],[3,4],[5,6,7]]
[1,2,3,4,5,6,7]
Additional edit:
traverse works with Traversable
traverse will work with any datatype that has a Traversable instance, for example trees.
> import Data.Tree
> let t = Node 1 [Node 2 [Node 3 [], Node 4 []], Node 5 []]
> let prettyTree = putStrLn . drawTree . fmap show
> prettyTree t
1
|
+- 2
|  |
|  +- 3
|  |
|  `- 4
|
`- 5
> toListOf (traverse . traverse) [t, t]
[1,2,3,4,5,1,2,3,4,5]

You could also use a fold and avoid explicit recursion:
tupleToList = foldr (\(f,s) a -> f : s : a) []
Or:
tupleToList = foldl (\a (f,s) -> a ++ [f,s]) []
(For elements of the same type)

This can also be achieved with the homogeneous tuples library (disclaimer: I'm the author). It defines wrappers for tuples that make them instances of Traversable (and others, such as Applicative and Monad). So a tuple can be converted to a list with toList . Tuple2 (where toList is from Data.Foldable), and
f :: [(a, a)] -> [a]
f = concatMap (toList . Tuple2)
You can also use it for other tuples, for example concatMap (toList . Tuple5) etc.
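If you'd rather not add the dependency, here is a minimal sketch of the same idea with a locally defined wrapper (the Tuple2 name merely mirrors the library's; this is not its actual code):
import Data.Foldable (toList)

-- Local wrapper that makes a homogeneous pair Foldable,
-- mirroring what the library's Tuple2 provides.
newtype Tuple2 a = Tuple2 (a, a)

instance Foldable Tuple2 where
  foldMap g (Tuple2 (x, y)) = g x <> g y

f :: [(a, a)] -> [a]
f = concatMap (toList . Tuple2)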

f [] = []
f ((x, y) : xs) = x : y : f xs

Related

Given a list, how can I perform some transformation only on sub-lists in which every two elements satisfy a binary predicate?

(In my actual use case I have a list of type [SomeType], SomeType having a finite number of constructors, all nullary; in the following I'll use String instead of [SomeType] and use only 4 Chars, to simplify a bit.)
I have a list like this "aaassddddfaaaffddsssadddssdffsdf" where each element can be one of 'a', 's', 'd', 'f', and I want to do some further processing on each contiguous sequence of non-'a's, let's say turning them upper case and reversing the sequence, thus obtaining "aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD". (I've added the reversing requirement to make it clear that the processing involves all the contiguous non-'a's at the same time.)
To turn each sub-String upper case, I can use this:
func :: String -> String
func = reverse . map Data.Char.toUpper
But how do I run that func only on the sub-Strings of non-'a's?
My first thought is that Data.List.groupBy can be useful, and the overall solution could be:
concat $ map (\x -> if head x == 'a' then x else func x)
$ Data.List.groupBy ((==) `on` (== 'a')) "aaassddddfaaaffddsssadddssdffsdf"
This solution, however, does not convince me, as I'm using == 'a' both when grouping (which to me seems good and unavoidable) and when deciding whether I should turn a group upper case.
I'm looking for advice on how to accomplish this small task in the best way.
You could classify the list elements by the predicate before grouping. Note that I’ve reversed the sense of the predicate to indicate which elements are subject to the transformation, rather than which elements are preserved.
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Arrow ((&&&))
import Data.Function (on)
import Data.List (groupBy)
import Data.Monoid (First(..))

mapSegmentsWhere
  :: forall a. (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
mapSegmentsWhere p f
  = concatMap (applyMatching . sequenceA)  -- [a]
  . groupBy ((==) `on` fst)                -- [[(First Bool, a)]]
  . map (First . Just . p &&& id)          -- [(First Bool, a)]
  where
    applyMatching :: (First Bool, [a]) -> [a]
    applyMatching (First (Just matching), xs)
      = applyIf matching f xs
    applyIf :: forall a. Bool -> (a -> a) -> a -> a
    applyIf condition f
      | condition = f
      | otherwise = id
Example use:
> mapSegmentsWhere (/= 'a') (reverse . map toUpper) "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Here I use the First monoid with sequenceA to merge the lists of adjacent matching elements from [(Bool, a)] to (Bool, [a]), but you could just as well use something like map (fst . head &&& map snd). You can also skip the ScopedTypeVariables if you don’t want to write the type signatures; I just included them for clarity.
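For comparison, a rough sketch of that simpler map (fst . head &&& map snd) variant (illustrative only; mapSegmentsWhere' is just a name picked to avoid a clash):
import Control.Arrow ((&&&))
import Data.Function (on)
import Data.List (groupBy)

mapSegmentsWhere' :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
mapSegmentsWhere' p f
  = concatMap (\(matching, xs) -> if matching then f xs else xs)
  . map (fst . head &&& map snd)  -- (Bool, [a]) for each group
  . groupBy ((==) `on` fst)       -- group adjacent equal flags
  . map (p &&& id)                -- tag each element with its flag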
If we need to remember the difference between the 'a's and the rest, let's put them in different branches of an Either. In fact, let's define a newtype now that we are at it:
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE ViewPatterns #-}
import Data.Bifoldable
import Data.Bifunctor
import Data.Char
import Data.List

newtype Bunched a b = Bunched [Either a b] deriving (Functor, Foldable)

instance Bifunctor Bunched where
  bimap f g (Bunched b) = Bunched (fmap (bimap f g) b)

instance Bifoldable Bunched where
  bifoldMap f g (Bunched b) = mconcat (fmap (bifoldMap f g) b)
fmap will let us work over the non-separators. fold will return the concatenation of the non-separators, bifold will return the concatenation of everything. Of course, we could have defined separate functions unrelated to Foldable and Bifoldable, but why avoid already existing abstractions?
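For instance, on a small hand-built value (illustrative GHCi session):
ghci> let b = Bunched [Left "aaa", Right "ssd", Left "aa", Right "f"]
ghci> bifold b                      -- everything, in order
"aaassdaaf"
ghci> foldMap id b                  -- only the non-separator chunks
"ssdf"
ghci> bifold (fmap (map toUpper) b)
"aaaSSDaaF"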
To split the list, we can use an unfoldr that alternately searches for as and non-as with the span function:
splitty :: Char -> String -> Bunched String String
splitty c str = Bunched $ unfoldr step (True, str)
  where
    step (_, []) = Nothing
    step (True, span (== c) -> (as, ys)) = Just (Left as, (False, ys))
    step (False, span (/= c) -> (xs, ys)) = Just (Right xs, (True, ys))
Putting it to work:
ghci> bifold . fmap func . splitty 'a' $ "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Note: Bunched is actually the same as Tannen [] Either from the bifunctors package, if you don't mind the extra dependency.
There are other answers here, but I think they get too excited about iteration abstractions. A manual recursion, alternately taking things that match the predicate and things that don't, makes this problem exquisitely simple:
onRuns :: Monoid m => (a -> Bool) -> ([a] -> m) -> ([a] -> m) -> [a] -> m
onRuns p = go p (not . p) where
  go _ _ _ _ [] = mempty
  go p p' f f' xs = case span p xs of
    (ts, rest) -> f ts `mappend` go p' p f' f rest
Try it out in ghci:
Data.Char> onRuns ('a'==) id (reverse . map toUpper) "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Here is a simple solution - function process below - that only requires that you define two functions isSpecial and func. Given a constructor from your type SomeType, isSpecial determines whether it is one of those constructors that form a special sublist or not. The function func is the one you included in your question; it defines what should happen with the special sublists.
The code below is for character lists. Just change isSpecial and func to make it work for your lists of constructors.
import Data.Char (toUpper)

isSpecial c = c /= 'a'
func = reverse . map toUpper

turn = map (\x -> ([x], isSpecial x))

amalgamate [] = []
amalgamate [x] = [x]
amalgamate ((xs, xflag) : (ys, yflag) : rest)
  | xflag /= yflag = (xs, xflag) : amalgamate ((ys, yflag) : rest)
  | otherwise      = amalgamate ((xs ++ ys, xflag) : rest)

work = map (\(xs, flag) -> if flag then func xs else xs)

process = concat . work . amalgamate . turn
Let's try it on your example:
*Main> process "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
*Main>
Applying one function at a time, shows the intermediate steps taken:
*Main> turn "aaassddddfaaaffddsssadddssdffsdf"
[("a",False),("a",False),("a",False),("s",True),("s",True),("d",True),
("d",True),("d",True),("d",True),("f",True),("a",False),("a",False),
("a",False),("f",True),("f",True),("d",True),("d",True),("s",True),
("s",True),("s",True),("a",False),("d",True),("d",True),("d",True),
("s",True),("s",True),("d",True),("f",True),("f",True),("s",True),
("d",True),("f",True)]
*Main> amalgamate it
[("aaa",False),("ssddddf",True),("aaa",False),("ffddsss",True),
("a",False),("dddssdffsdf",True)]
*Main> work it
["aaa","FDDDDSS","aaa","SSSDDFF","a","FDSFFDSSDDD"]
*Main> concat it
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
*Main>
We can just do what you describe, step by step, getting a clear simple minimal code which we can easily read and understand later on:
import Data.Function (on)
import Data.List (groupBy)

foo :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
foo p f xs = [ a
             | g <- groupBy ((==) `on` fst)
                            [(p x, x) | x <- xs]  -- [ (True, 'a'), ... ]
             , let (t:_, as) = unzip g            -- ( [True, ...], "aaa" )
             , a <- if t then as else f as ]      -- final concat
-- unzip :: [(b, a)] -> ([b], [a])
We break the list into same-p spans and unpack each group with the help of unzip. Trying it out:
> foo (=='a') reverse "aaabcdeaa"
"aaaedcbaa"
So no, using == 'a' is avoidable and hence not especially good, introducing an unnecessary constraint on your data type when all we need is equality on Booleans.

Removing duplicate elements in a Seq

I'm wondering how to implement nub over a Seq a.
I get that one could do:
nubSeq :: Eq a => Seq a -> Seq a
nubSeq = fromList . nub . toList
Just wondering: is there something standard that does not convert to lists in order to call nub :: [a] -> [a]?
An implementation that occurred to me, based obviously on nub, is:
nubSeq :: (Eq a) => Seq a -> Seq a
nubSeq = Data.Sequence.foldrWithIndex
           (\_ x a -> case x `Data.Sequence.elemIndexR` a of
               Just _  -> a
               Nothing -> a |> x)
           Data.Sequence.empty
But there must be something more elegant?
thanks.
Not sure whether this qualifies as more elegant but it splits the concerns in independent functions (caveat: you need an Ord constraint on a):
seqToNubMap takes a Seq and outputs a Map associating to each a the smallest index at which it appeared in the sequence
mapToList takes a Map of values and positions and produces a list of values in increasing order according to the specified positions
nubSeq combines these to generate a sequence without duplicates
The whole thing should be O(n*log(n)), I believe:
module NubSeq where
import Data.Map as Map
import Data.List as List
import Data.Sequence as Seq
import Data.Function
seqToNubMap :: Ord a => Seq a -> Map a Int
seqToNubMap = foldlWithIndex (\ m k v -> insertWith min v k m) Map.empty
mapToList :: Ord a => Map a Int -> [a]
mapToList = fmap fst . List.sortBy (compare `on` snd) . Map.toList
nubSeq :: Ord a => Seq a -> Seq a
nubSeq = Seq.fromList . mapToList . seqToNubMap
Or a simpler alternative following #DavidFletcher's comment:
{-# LANGUAGE ScopedTypeVariables #-}
import qualified Data.Foldable as Fold
import Data.Set (Set)
import qualified Data.Set as Set

nubSeq' :: forall a. Ord a => Seq a -> Seq a
nubSeq' xs = Fold.foldr cons nil xs Set.empty where
  cons :: a -> (Set a -> Seq a) -> (Set a -> Seq a)
  cons x xs seen
    | x `elem` seen = xs seen
    | otherwise     = x <| xs (Set.insert x seen)
  nil :: Set a -> Seq a
  nil _ = Seq.empty
Another way with an Ord constraint - use a scan to make the sets of
elements that appear in each prefix of the list. Then we can filter out
any element that's already been seen.
import Data.Sequence as Seq
import Data.Set as Set

nubSeq :: Ord a => Seq a -> Seq a
nubSeq xs = (fmap fst . Seq.filter (uncurry notElem)) (Seq.zip xs seens)
  where
    seens = Seq.scanl (flip Set.insert) Set.empty xs
Or roughly the same thing as a mapAccumL:
nubSeq' :: Ord a => Seq a -> Seq a
nubSeq' = fmap fst . Seq.filter snd . snd . mapAccumL f Set.empty
  where
    f s x = (Set.insert x s, (x, x `notElem` s))
(If I was using lists I would use Maybes instead of the pairs with
Bool, then use catMaybes instead of filtering. There doesn't seem to be catMaybes
for Sequence though.)
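If you do want that style, a catMaybes analogue for Seq is a one-liner to sketch (the name seqCatMaybes is mine, not from the library):
import Data.Sequence (Seq, (<|))
import qualified Data.Sequence as Seq

-- Keep the Just values, preserving order.
seqCatMaybes :: Seq (Maybe a) -> Seq a
seqCatMaybes = foldr (\m acc -> maybe acc (<| acc) m) Seq.empty
With that, the mapAccumL variant above could emit Maybe values directly instead of (value, Bool) pairs.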
I think your code should be pretty efficient. Since Sequence is a tree data structure, using another tree-shaped structure such as Map or HashMap to store and look up the previous items doesn't make much sense to me.
Instead, I take the first item and check whether it exists in the rest. If it does, I drop that item and proceed recursively with the rest. If it doesn't, I construct a new sequence whose first element is that unique item and whose rest is the result of nubSeq applied to the remainder. Fairly typical. I use ViewPatterns.
{-# LANGUAGE ViewPatterns #-}
import Data.Sequence as Seq
nubSeq :: Eq a => Seq a -> Seq a
nubSeq (viewl -> EmptyL) = empty
nubSeq (viewl -> (x :< xs))
  | elemIndexL x xs == Nothing = x <| nubSeq xs
  | otherwise                  = nubSeq xs
*Main> nubSeq . fromList $ [1,2,3,4,4,2,3,6,7,1,2,3,4]
fromList [6,7,1,2,3,4]

How to filter a list by another list in Haskell?

Suppose I have two lists A and B of the same length. I want to keep elements in A which are greater than corresponding elements in B. Let A=[1,5,8], B=[2,4,9], the result should be [5] because 1<2, 5>4, 8<9.
I came up with a solution: let C = zip A B, then filter C, and finally get the result by taking fst of each element of C. It's not so elegant. Is there a simpler way?
Code:
map fst (filter (\x -> fst x > snd x) (zip a b))
Your described solution looks fine to me.
An alternative which is not necessarily better:
import Data.Maybe
import Control.Monad
catMaybes $ zipWith (\a b -> guard (a>b) >> return a) list1 list2
According to the desugaring of monad comprehensions this should also work
{-# LANGUAGE MonadComprehensions #-}
[ a | ( a <- list1 | b <- list2 ), a > b ]
... but in practice it does not. It is a pity because I find it quite elegant.
I wonder whether I got it wrong or it is a GHC bug.
I was working on something similar and as a newbie this is the best I came up with:
import Control.Monad (guard)

filterGreaterThan xs ys = do (x, y) <- zip xs ys
                             guard (x > y)
                             return x
This solution is easier to reason about than the others. The do notation really shines here.
I'm not sure how your code looks but the following function look quite elegant to me:
greater :: Ord a => [a] -> [a] -> [a]
greater xs = map fst . filter ((>) <$> fst <*> snd) . zip xs
example :: [Int]
example = greater [1,5,8] [2,4,9] -- result is [5]
This pattern is well known in the Lisp community as the decorate-process-undecorate pattern.
A recursive approach, not as elegant as (any of) the other approaches; this relies on no explicit zipping and we get the result in one pass:
greater :: Ord a => [a] -> [a] -> [a]
greater [] [] = []
greater (x:xs) (y:ys)
  | x > y     = x : greater xs ys
  | otherwise = greater xs ys
If you want to generalize this idea nicely, I would recommend looking to mapMaybe:
mapMaybe
  :: (a -> Maybe b)
  -> [a] -> [b]
Applying that idea to zipWith yields
zipWithMaybe
  :: (a -> b -> Maybe c)
  -> [a] -> [b] -> [c]
zipWithMaybe f xs ys =
  [c | Just c <- zipWith f xs ys]
Now you can write your function
keepGreater :: Ord a => [a] -> [a] -> [a]
keepGreater = zipWithMaybe $
  \x y -> x <$ guard (x > y)
Is it really worth the trouble? For lists, probably not. But something like this turns out to be useful in the context of merges for Data.Map.
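For the curious, a hedged sketch of what that could look like for Maps, assuming the Data.Map.Merge.Lazy API from containers >= 0.5.9 (keepGreaterMap is a made-up name):
import Control.Monad (guard)
import qualified Data.Map as M
import Data.Map.Merge.Lazy (merge, dropMissing, zipWithMaybeMatched)

-- Keep entries of the first map whose value is greater than the
-- corresponding entry in the second map; keys present in only one
-- map are dropped.
keepGreaterMap :: (Ord k, Ord a) => M.Map k a -> M.Map k a -> M.Map k a
keepGreaterMap =
  merge dropMissing dropMissing
        (zipWithMaybeMatched (\_ x y -> x <$ guard (x > y)))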
Pretty similar to #chi's solution, with lists and concat:
concat $ zipWith (\a b -> last $ []:[[a] | a > b]) as bs

What to use instead of explicit recursion in Haskell?

Write a function that doubles every other number, beginning with the 2nd number from the right:
Example:
doubleEveryOther [8,7,6,5]
=> [16,7,12,5]
doubleEveryOther [1,2,3]
=> [1,4,3]
O(n) solution:
doubleEveryOther :: Num a => [a] -> [a]
doubleEveryOther xs0 =
  let (_, r) = deo xs0
      deo xs1 = case xs1 of
        []     -> (False, [])
        (x:xs) -> let (b, xs') = deo xs
                  in (not b, (if b then 2*x else x) : xs')
  in r
The use of explicit recursion above is generally considered poor Haskell style (e.g., use fold*, scan, etc. where possible).
QUESTIONS
what Haskell library functions cover the above case?
what would be a more concise/idiomatic Haskell solution that is still O(n)?
is there a name for the above type of recursion (where we use the value from a deeper recursion to make a decision the next level up)?
You can use foldr to do this kind of recursion from the right:
doubleEveryOther = snd . foldr go (False, [])
  where go x (b, xs) = (not b, (if b then 2*x else x) : xs)
Another way to define this function by using standard library functions:
doubleEveryOther ls = reverse $ zipWith (*) (cycle [1,2]) (reverse ls)
Or in pointfree style
doubleEveryOther = reverse . zipWith (*) (cycle [1,2]) . reverse
Lots of useful answers here, but no one yet mentioned the rarely seen function mapAccumR from Data.List which fits this particular use case almost perfectly:
doubleEveryOther :: Num a => [a] -> [a]
doubleEveryOther = snd . mapAccumR step False
  where
    step False x = (True, x)
    step True  x = (False, 2*x)
As to question 1 and 2, with lens you can define the function in a declarative manner:
import Control.Lens
doubleEveryOther :: Num a => [a] -> [a]
doubleEveryOther = reversed . traversed . indices odd *~ 2
Operationally, this involves reversing the list, then modifying, then reversing again, but of course it's still O(N) with any constant number of reversals.
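For example (illustrative, matching the examples from the question):
> doubleEveryOther [8,7,6,5]
[16,7,12,5]
> doubleEveryOther [1,2,3]
[1,4,3]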
An alternative is to use the lens package. This allows you to avoid explicit recursion and remain very flexible about which data structures you can operate on.
You can use the elements traversal. It takes an Int -> Bool function to decide which indices to act on.
Double even indices or odd indices.
> over (elements even) (*2) [8,7,6,5]
[16,7,12,5]
> over (elements odd) (*2) [8,7,6,5]
[8,14,6,10]
Or double every third element:
> over (elements (\n -> mod n 3 == 0)) (*2) [8,7,6,5]
[16,7,6,10]
Not just lists
This technique will work for any datatype that has a Traversable instance.
For example, take the standard tree datatype from the containers package:
> import Data.Tree
> let tree = Node 1 [Node 2 [Node 3 [], Node 4 []], Node 5 [Node 6 []]]
> let prettyTree = putStrLn . drawTree . fmap show
> prettyTree tree
1
|
+- 2
|  |
|  +- 3
|  |
|  `- 4
|
`- 5
   |
   `- 6
> prettyTree $ over (elements even) (*2) tree
2        -- 1
|        -- |
+- 2     -- +- 2
|  |     -- |  |
|  +- 6  -- |  +- 3
|  |     -- |  |
|  `- 4  -- |  `- 4
|        -- |
`- 10    -- `- 5
   |     --    |
   `- 6  --    `- 6
Your questions:
The lens package has a number of functions that help with handling recursion without being explicit.
The lens code is concise, though some do not yet consider it idiomatic. I have not tested the big-O of the above functions; my understanding is that it will depend on the big-O of the Traversable instance for the datatype you are using.
The list instance in the Traversable module looks straightforward and should meet your expectations:
instance Traversable [] where
  {-# INLINE traverse #-} -- so that traverse can fuse
  traverse f = Prelude.foldr cons_f (pure [])
    where cons_f x ys = (:) <$> f x <*> ys
I am not sure what you are asking for here.
You can use map as well:
Prelude> let f ns = map (\(a,b) -> if (even (length ns) && even b) || (odd (length ns) && odd b) then a else a * 2) $ zip ns [1..]
Prelude> f [8,7,6,5]
[16,7,12,5]
Prelude> f [8,7,6]
[8,14,6]
My solution, using mutual recursion:
doubleEveryOther :: [Integer] -> [Integer]
doubleEveryOther xs
  | even n    = doubleOdd xs
  | otherwise = doubleEven xs
  where n = length xs

-- | use mutual recursion
doubleEven :: Num a => [a] -> [a]
doubleEven (x:xs) = x : doubleOdd xs
doubleEven []     = []

doubleOdd :: Num a => [a] -> [a]
doubleOdd (x:xs) = (2*x) : doubleEven xs
doubleOdd []     = []
For the sake of completeness, here is your solution encoded as a recursion-schemes zygomorphism, as anticipated by András Kovács's remark:
{-# LANGUAGE LambdaCase #-}
import Data.Functor.Foldable

doubleEveryOther :: Num a => [a] -> [a]
doubleEveryOther = zygo flagAlg emitAlg
  where
    flagAlg = \case
      Nil       -> False
      Cons _ b  -> not b
    emitAlg = \case
      Nil            -> []
      Cons x (b, xs) -> (if b then 2*x else x) : xs

Recursively sort non-contiguous list to list of contiguous lists

I've been trying to learn a bit of functional programming (with Haskell & Erlang) lately and I'm always amazed at the succinct solutions people can come up with when they can think recursively and know the tools.
I want a function to convert a list of sorted, unique, non-contiguous integers into a list of contiguous lists, i.e:
[1,2,3,6,7,8,10,11]
to:
[[1,2,3], [6,7,8], [10,11]]
This was the best I could come up with in Haskell (two functions):
make_ranges :: [[Int]] -> [Int] -> [[Int]]
make_ranges ranges [] = ranges
make_ranges [] (x:xs)
  | null xs   = [[x]]
  | otherwise = make_ranges [[x]] xs
make_ranges ranges (x:xs)
  | (last (last ranges)) + 1 == x =
      make_ranges ((init ranges) ++ [(last ranges ++ [x])]) xs
  | otherwise = make_ranges (ranges ++ [[x]]) xs

rangify :: [Int] -> [[Int]]
rangify lst = make_ranges [] lst
It might be a bit subjective but I'd be interested to see a better, more elegant, solution to this in either Erlang or Haskell (other functional languages too but I might not understand it.) Otherwise, points for just fixing my crappy beginner's Haskell style!
Most straightforward way in my mind is a foldr:
ranges = foldr step []
  where step x [] = [[x]]
        step x acc@((y:ys):zs) | y == x + 1 = (x:y:ys):zs
                               | otherwise  = [x]:acc
Or, more concisely:
ranges = foldr step []
  where step x ((y:ys):zs) | y == x + 1 = (x:y:ys):zs
        step x acc = [x]:acc
But wait, there's more!
abstractRanges f = foldr step []
  where step x ((y:ys):zs) | f x y = (x:y:ys):zs
        step x acc = [x]:acc

ranges      = abstractRanges (\x y -> y == x + 1)
powerRanges = abstractRanges (\x y -> y == x*x) -- mighty morphin
By turning the guard function into a parameter, you can group more interesting things than just +1 sequences.
*Main> powerRanges [1,1,1,2,4,16,3,9,81,5,25]
[[1,1,1],[2,4,16],[3,9,81],[5,25]]
The utility of this particular function is questionable...but fun!
I can't believe I got the shortest solution. I know this is no code golf, but I think it is still quite readable:
import GHC.Exts
range xs = map (map fst) $ groupWith snd $ zipWith (\a b -> (a, a-b)) xs [0..]
or pointfree
range = map (map snd) . groupWith fst . zipWith (\a b -> (b-a, b)) [0..]
BTW, groupWith snd can be replaced with groupBy (\a b -> snd a == snd b) if you prefer Data.List over GHC.Exts
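Spelled out, that Data.List-only variant might look like this (an illustrative sketch; range' is just a name to avoid clashing with the above):
import Data.Function (on)
import Data.List (groupBy)

-- Elements of a contiguous run share the same (value - index) difference.
range' :: [Int] -> [[Int]]
range' xs = map (map fst)
          . groupBy ((==) `on` snd)
          $ zipWith (\a b -> (a, a - b)) xs [0..]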
[Edit]
BTW: Is there a nicer way to get rid of the lambda (\a b -> (b-a, b)) than (curry $ (,) <$> ((-) <$> snd <*> fst) <*> snd) ?
[Edit 2]
Yeah, I forgot (,) is a functor. So here is the obfuscated version:
range = map (map fst) . groupWith snd . (flip $ zipWith $ curry $ fmap <$> (-).fst <*> id) [0..]
Suggestions are welcome...
import Data.List (groupBy)

ranges xs = (map.map) snd
          . groupBy (const fst)
          . zip (True : zipWith ((==) . succ) xs (tail xs))
          $ xs
As to how to come up with such a thing: I started with the zipWith f xs (tail xs), which is a common idiom when you want to do something on consecutive elements of a list. Likewise is zipping up a list with information about the list, and then acting (groupBy) upon it. The rest is plumbing.
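For instance, the flag list for the example input looks like this (illustrative GHCi session); each flag says whether the corresponding element continues the run started by its predecessor, which is exactly what groupBy (const fst) groups on:
> let xs = [1,2,3,6,7,8,10,11]
> zipWith ((==) . succ) xs (tail xs)
[True,True,False,True,True,False,True]
> True : zipWith ((==) . succ) xs (tail xs)
[True,True,True,False,True,True,False,True]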
Then, of course, you can feed it through #pl and get:
import Data.List (groupBy)
import Control.Monad (ap)
import Control.Monad.Instances()
ranges = (((map.map) snd
         . groupBy (const fst))
         .) =<< zip
         . (True:)
         . ((zipWith ((==) . succ)) `ap` tail)
, which, by my authoritative definition, is evil due to Monad ((->) a). Twice, even. The data flow is meandering too much to lay it out in any sensible way. zipaptail is an Aztec god, and Aztec gods aren't to be messed with.
Another version in Erlang:
part(List) -> part(List, []).
part([H1,H2|T], Acc) when H1 =:= H2 - 1 ->
    part([H2|T], [H1|Acc]);
part([H1|T], Acc) ->
    [lists:reverse([H1|Acc]) | part(T, [])];
part([], Acc) -> Acc.
k z = map (fst <$>) . groupBy (const snd) .
      zip z . (False:) . (zipWith ((==) . succ) <*> tail) $ z
Try reusing standard functions.
import Data.List (groupBy)

rangeify :: (Eq a, Num a) => [a] -> [[a]]
rangeify l = map (map fst) $ groupBy (const snd) $ zip l contigPoints
  where contigPoints = False : zipWith (==) (map (+1) l) (drop 1 l)
Or, following (mixed) advice to use unfoldr, stop abusing groupBy, and be happy using partial functions when it doesn't matter:
import Control.Arrow ((***))
import Data.List (unfoldr)

spanContig :: (Eq a, Num a) => [a] -> ([a], [a])
spanContig l =
  map fst *** map fst $ span (\(a, b) -> a == b + 1) $ zip l (head l - 1 : l)

rangeify :: (Eq a, Num a) => [a] -> [[a]]
rangeify = unfoldr $ \l -> if null l then Nothing else Just $ spanContig l
Erlang using foldr:
ranges(List) ->
    lists:foldr(fun (X, [[Y | Ys] | Acc]) when Y == X + 1 ->
                        [[X, Y | Ys] | Acc];
                    (X, Acc) ->
                        [[X] | Acc]
                end, [], List).
This is my v0.1 and I can probably make it better:
makeCont :: [Int] -> [[Int]]
makeCont [] = []
makeCont [a] = [[a]]
makeCont (a:b:xs) = if b - a == 1
                      then (a : head next) : tail next
                      else [a] : next
  where
    next :: [[Int]]
    next = makeCont (b:xs)
And I will try and make it better. Edits coming I think.
As a comparison, here's an implementation in Erlang:
partition(L) -> [lists:reverse(T) || T <- lists:reverse(partition(L, {[], []}))].
partition([E|L], {R, [EL|_] = T}) when E == EL + 1 -> partition(L, {R, [E|T]});
partition([E|L], {R, []}) -> partition(L, {R, [E]});
partition([E|L], {R, T}) -> partition(L, {[T|R], [E]});
partition([], {R, []}) -> R;
partition([], {R, T}) -> [T|R].
The standard paramorphism recursion scheme isn't in Haskell's Data.List module, though I think it should be. Here's a solution using a paramorphism, because you are building a list-of-lists from a list, the cons-ing is a little tricksy:
contig :: (Eq a, Num a) => [a] -> [[a]]
contig = para phi [] where
  phi x ((y:_), (a:acc)) | x + 1 == y = (x:a):acc
  phi x (_, acc)                      = [x]:acc
Paramorphism is general recursion or a fold with lookahead:
para :: (a -> ([a], b) -> b) -> b -> [a] -> b
para phi b []     = b
para phi b (x:xs) = phi x (xs, para phi b xs)
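A quick check (illustrative):
*Main> contig [1,2,3,6,7,8,10,11]
[[1,2,3],[6,7,8],[10,11]]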
It can be pretty clear and simple in the Erlang:
partition([]) -> [];
partition([A|T]) -> partition(T, [A]).
partition([A|T], [B|_]=R) when A =:= B+1 -> partition(T, [A|R]);
partition(L, P) -> [lists:reverse(P)|partition(L)].
Edit: Just out of curiosity, I have compared my version and Lukas's, and mine seems about 10% faster, in both the native and the bytecode versions, on a test set generated by lists:usort([random:uniform(1000000)||_<-lists:seq(1,1000000)]) on R14B01 64-bit on my notebook. (The test set is 669462 elements long and was partitioned into 232451 sublists.)
Edit 2: Another test data set, lists:usort([random:uniform(1000000)||_<-lists:seq(1,10000000)]), of length 999963 with 38 partitions, makes a bigger difference in native code: my version finishes in less than half the time. The bytecode version is only about 20% faster.
Edit 3: Some micro-optimizations which provide additional performance but lead to uglier and less maintainable code:
part4([]) -> [];
part4([A|T]) -> part4(T, A, []).
part4([A|T], B, R) when A =:= B+1 -> part4(T, A, [B|R]);
part4([A|T], B, []) -> [[B]|part4(T, A, [])];
part4([A|T], B, R) -> [lists:reverse(R, [B])|part4(T, A, [])];
part4([], B, R) -> [lists:reverse(R,[B])].
Here's an attempt from a Haskell noob:
ranges ls = let (a, r) = foldl (\(r, a@(h:t)) e -> if h + 1 == e then (r, e:a) else (a:r, [e])) ([], [head ls]) (tail ls)
            in reverse . map reverse $ r : a
