Removing duplicate elements in a Seq - haskell

Wondering how to implement nub over a Seq a. I get that one could do:

nubSeq :: Eq a => Seq a -> Seq a
nubSeq = fromList . nub . toList

Just wondering: is there something standard that does not convert to a List in order to call nub :: [a] -> [a]?
An implementation that occurred to me, based obviously on nub, is:
nubSeq :: (Eq a) => Seq a -> Seq a
nubSeq = Data.Sequence.foldrWithIndex
           (\_ x a -> case x `Data.Sequence.elemIndexR` a of
                        Just _  -> a
                        Nothing -> a |> x)
           Data.Sequence.empty
But there must be something more elegant?
thanks.

Not sure whether this qualifies as more elegant, but it splits the concerns into independent functions (caveat: you need an Ord constraint on a):
seqToNubMap takes a Seq and outputs a Map associating to each a the smallest index at which it appeared in the sequence
mapToList takes a Map of values and positions and produces a list of values in increasing order according to the specified positions
nubSeq combines these to generate a sequence without duplicates
The whole thing should be O(n*log(n)), I believe:
module NubSeq where
import Data.Map as Map
import Data.List as List
import Data.Sequence as Seq
import Data.Function
seqToNubMap :: Ord a => Seq a -> Map a Int
seqToNubMap = foldlWithIndex (\ m k v -> insertWith min v k m) Map.empty
mapToList :: Ord a => Map a Int -> [a]
mapToList = fmap fst . List.sortBy (compare `on` snd) . Map.toList
nubSeq :: Ord a => Seq a -> Seq a
nubSeq = Seq.fromList . mapToList . seqToNubMap
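For example (an illustrative GHCi check of my own, not part of the original answer), duplicates are dropped while each element keeps the position of its first occurrence:

*NubSeq> nubSeq (Seq.fromList [1,2,3,4,4,2,3,6,7,1,2,3,4])
fromList [1,2,3,4,6,7]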
Or a simpler alternative following @DavidFletcher's comment:
-- additionally needs ScopedTypeVariables, "import Data.Set (Set)",
-- "import qualified Data.Set as Set" and "import qualified Data.Foldable as Fold"
nubSeq' :: forall a. Ord a => Seq a -> Seq a
nubSeq' xs = Fold.foldr cons nil xs Set.empty where

  cons :: a -> (Set a -> Seq a) -> (Set a -> Seq a)
  cons x xs seen
    | x `elem` seen = xs seen
    | otherwise     = x <| xs (Set.insert x seen)

  nil :: Set a -> Seq a
  nil _ = Seq.empty

Another way with an Ord constraint - use a scan to make the sets of
elements that appear in each prefix of the list. Then we can filter out
any element that's already been seen.
import Data.Sequence as Seq
import Data.Set as Set

nubSeq :: Ord a => Seq a -> Seq a
nubSeq xs = (fmap fst . Seq.filter (uncurry notElem)) (Seq.zip xs seens)
  where
    seens = Seq.scanl (flip Set.insert) Set.empty xs
Or roughly the same thing as a mapAccumL:
nubSeq' :: Ord a => Seq a -> Seq a
nubSeq' = fmap fst . Seq.filter snd . snd . mapAccumL f Set.empty
  where
    f s x = (Set.insert x s, (x, x `notElem` s))
(If I were using lists I would use Maybes instead of the pairs with Bool, then use catMaybes instead of filtering. There doesn't seem to be a catMaybes for Sequence, though.)
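For the record, a catMaybes for Seq is easy enough to write by hand if you prefer the Maybe-based variant; below is a sketch of my own (catMaybesSeq and nubSeq'' are hypothetical names, and it additionally assumes import Data.Maybe (isJust, fromJust)):

catMaybesSeq :: Seq (Maybe a) -> Seq a
catMaybesSeq = fmap fromJust . Seq.filter isJust

nubSeq'' :: Ord a => Seq a -> Seq a
nubSeq'' = catMaybesSeq . snd . mapAccumL f Set.empty
  where
    f s x = (Set.insert x s, if x `Set.member` s then Nothing else Just x)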

I think your code should be pretty efficient. Since a Sequence is a tree data structure, using another tree-based structure like Map or HashMap to store and look up the previous items doesn't make too much sense to me.
Instead I take the first item and check for its existence in the rest. If it exists, I drop that item and proceed the same way with the rest recursively. If not, I construct a new sequence whose first element is the unique element and whose rest is the result of nubSeq applied to the rest. Should be typical. I use ViewPatterns.
{-# LANGUAGE ViewPatterns #-}
import Data.Sequence as Seq

nubSeq :: Eq a => Seq a -> Seq a
nubSeq (viewl -> EmptyL) = empty
nubSeq (viewl -> (x :< xs))
  | elemIndexL x xs == Nothing = x <| nubSeq xs
  | otherwise                  = nubSeq xs
*Main> nubSeq . fromList $ [1,2,3,4,4,2,3,6,7,1,2,3,4]
fromList [6,7,1,2,3,4]
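Note that this keeps the last occurrence of each element, which is why the output above starts with 6. If you want to keep the first occurrences instead, as nub does, one small variation (a sketch of my own, using the same import and ViewPatterns) is to filter the duplicates out of the tail before recursing:

nubSeqFirst :: Eq a => Seq a -> Seq a
nubSeqFirst (viewl -> EmptyL) = empty
nubSeqFirst (viewl -> (x :< xs)) = x <| nubSeqFirst (Seq.filter (/= x) xs)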

Related

Given a list, how can I perform some transformation only on sub-lists whose each two elements satisfy a binary predicate?

(In my actual use case I have a list of type [SomeType], SomeType having a finite number of constructors, all nullary; in the following I'll use String instead of [SomeType] and use only 4 Chars, to simplify a bit.)
I have a list like this "aaassddddfaaaffddsssadddssdffsdf" where each element can be one of 'a', 's', 'd', 'f', and I want to do some further processing on each contiguous sequence of non-'a's, let's say turning them upper case and reversing the sequence, thus obtaining "aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD". (I've added the reversing requirement to make it clear that the processing involves all the contiguous non-'a's at the same time.)
To turn each sub-String upper case, I can use this:
func :: String -> String
func = reverse . map Data.Char.toUpper
But how do I run that func only on the sub-Strings of non-'a's?
My first thought is that Data.List.groupBy can be useful, and the overall solution could be:
concat $ map (\x -> if head x == 'a' then x else func x)
$ Data.List.groupBy ((==) `on` (== 'a')) "aaassddddfaaaffddsssadddssdffsdf"
This solution, however, does not convince me, as I'm using == 'a' both when grouping (which to me seems good and unavoidable) and when deciding whether I should turn a group upper case.
I'm looking for advices on how I can accomplish this small task in the best way.
You could classify the list elements by the predicate before grouping. Note that I’ve reversed the sense of the predicate to indicate which elements are subject to the transformation, rather than which elements are preserved.
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Arrow ((&&&))
import Data.Function (on)
import Data.List (groupBy)
import Data.Monoid (First(..))

mapSegmentsWhere
  :: forall a. (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
mapSegmentsWhere p f
  = concatMap (applyMatching . sequenceA)  -- [a]
  . groupBy ((==) `on` fst)                -- [[(First Bool, a)]]
  . map (First . Just . p &&& id)          -- [(First Bool, a)]
  where
    applyMatching :: (First Bool, [a]) -> [a]
    applyMatching (First (Just matching), xs)
      = applyIf matching f xs

    applyIf :: forall a. Bool -> (a -> a) -> a -> a
    applyIf condition f
      | condition = f
      | otherwise = id
Example use:
> mapSegmentsWhere (/= 'a') (reverse . map toUpper) "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Here I use the First monoid with sequenceA to merge the lists of adjacent matching elements from [(Bool, a)] to (Bool, [a]), but you could just as well use something like map (fst . head &&& map snd). You can also skip the ScopedTypeVariables if you don’t want to write the type signatures; I just included them for clarity.
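For illustration, the alternative just mentioned (without First and sequenceA) might look something like the following; this is a sketch of my own, reusing the imports above, with mapSegmentsWhere' as a hypothetical name:

mapSegmentsWhere' :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
mapSegmentsWhere' p f
  = concatMap apply
  . groupBy ((==) `on` fst)
  . map (\x -> (p x, x))
  where
    apply g = let (matching, xs) = (fst (head g), map snd g)
              in if matching then f xs else xs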
If we need to remember the difference between the 'a's and the rest, let's put them in different branches of an Either. In fact, let's define a newtype now that we are at it:
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE ViewPatterns #-}

import Data.Bifoldable
import Data.Bifunctor
import Data.Char
import Data.List

newtype Bunched a b = Bunched [Either a b] deriving (Functor, Foldable)

instance Bifunctor Bunched where
  bimap f g (Bunched b) = Bunched (fmap (bimap f g) b)

instance Bifoldable Bunched where
  bifoldMap f g (Bunched b) = mconcat (fmap (bifoldMap f g) b)
fmap will let us work over the non-separators. fold will return the concatenation of the non-separators, bifold will return the concatenation of everything. Of course, we could have defined separate functions unrelated to Foldable and Bifoldable, but why avoid already existing abstractions?
To split the list, we can use an unfoldr that alternately searches for as and non-as with the span function:
splitty :: Char -> String -> Bunched String String
splitty c str = Bunched $ unfoldr step (True, str)
  where
    step (_, []) = Nothing
    step (True,  span (== c) -> (as, ys)) = Just (Left as,  (False, ys))
    step (False, span (/= c) -> (xs, ys)) = Just (Right xs, (True, ys))
Putting it to work:
ghci> bifold . fmap func . splitty 'a' $ "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Note: Bunched is actually the same as Tannen [] Either from the bifunctors package, if you don't mind the extra dependency.
There are other answers here, but I think they get too excited about iteration abstractions. A manual recursion, alternately taking things that match the predicate and things that don't, makes this problem exquisitely simple:
onRuns :: Monoid m => (a -> Bool) -> ([a] -> m) -> ([a] -> m) -> [a] -> m
onRuns p = go p (not . p) where
  go _ _ _ _ [] = mempty
  go p p' f f' xs = case span p xs of
    (ts, rest) -> f ts `mappend` go p' p f' f rest
Try it out in ghci:
Data.Char> onRuns ('a'==) id (reverse . map toUpper) "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
Here is a simple solution - function process below - that only requires that you define two functions isSpecial and func. Given a constructor from your type SomeType, isSpecial determines whether it is one of those constructors that form a special sublist or not. The function func is the one you included in your question; it defines what should happen with the special sublists.
The code below is for character lists. Just change isSpecial and func to make it work for your lists of constructors.
import Data.Char (toUpper)

isSpecial c = c /= 'a'
func = reverse . map toUpper

turn = map (\x -> ([x], isSpecial x))

amalgamate [] = []
amalgamate [x] = [x]
amalgamate ((xs, xflag) : (ys, yflag) : rest)
  | xflag /= yflag = (xs, xflag) : amalgamate ((ys, yflag) : rest)
  | otherwise      = amalgamate ((xs ++ ys, xflag) : rest)

work = map (\(xs, flag) -> if flag then func xs else xs)

process = concat . work . amalgamate . turn
Let's try it on your example:
*Main> process "aaassddddfaaaffddsssadddssdffsdf"
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
*Main>
Applying one function at a time, shows the intermediate steps taken:
*Main> turn "aaassddddfaaaffddsssadddssdffsdf"
[("a",False),("a",False),("a",False),("s",True),("s",True),("d",True),
("d",True),("d",True),("d",True),("f",True),("a",False),("a",False),
("a",False),("f",True),("f",True),("d",True),("d",True),("s",True),
("s",True),("s",True),("a",False),("d",True),("d",True),("d",True),
("s",True),("s",True),("d",True),("f",True),("f",True),("s",True),
("d",True),("f",True)]
*Main> amalgamate it
[("aaa",False),("ssddddf",True),("aaa",False),("ffddsss",True),
("a",False),("dddssdffsdf",True)]
*Main> work it
["aaa","FDDDDSS","aaa","SSSDDFF","a","FDSFFDSSDDD"]
*Main> concat it
"aaaFDDDDSSaaaSSSDDFFaFDSFFDSSDDD"
*Main>
We can just do what you describe, step by step, getting clear, simple, minimal code which we can easily read and understand later on:
import Data.List (groupBy)
import Data.Function (on)

foo :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
foo p f xs = [ a
             | g <- groupBy ((==) `on` fst)
                        [ (p x, x) | x <- xs ]    -- [ (True, 'a'), ... ]
             , let (t:_, as) = unzip g            -- ( [True, ...], "aaa" )
             , a <- if t then as else (f as) ]    -- final concat
  -- unzip :: [(b, a)] -> ([b], [a])
We break the list into same-p spans and unpack each group with the help of unzip. Trying it out:
> foo (=='a') reverse "aaabcdeaa"
"aaaedcbaa"
So no, the second use of == 'a' is not needed: it is avoidable, and avoiding it is preferable, since it would introduce an unnecessary constraint on your data type when all we need is equality on Booleans.

Search a list of integers for one of the longest increasing subsequences (not necessarily contiguous)

I need a function which finds, in a list of integers, one of the longest increasing subsequences (the elements need not be consecutive). Example:
• Sequence [21,27,15,14,18,16,14,17,22,13] = [14,16,17,22]
I have a problem with my function, which takes the initial number from the list and looks for a sequence:

fstLen :: Int -> [Int] -> [Int]
fstLen a [] = a : []
fstLen x (l:ls) = if x < l then x : fstLen l ls else fstLen x ls

I run into problems at 14,18,16,14,17,22,13: 14 < 18, but then 18 > 16, so my algorithm takes 16 as the new starting point and looks for a new sequence, whereas I need to go back to 14.
How can I do it?
(sorry for my english)
You could always just use subsequences from Data.List to get all the possible subsequences of a list. Once you have these subsequences, just keep the sorted ones, filtering with this function:
isSorted :: (Ord a) => [a] -> Bool
isSorted [] = True
isSorted [_] = True
isSorted (x:y:xs) = x <= y && isSorted (y:xs)
Then get the maximum-length subsequence with maximumBy (or another method), with the ordering being comparing length.
Here is what the code could look like:
import Data.Ord (comparing)
import Data.List (subsequences, maximumBy, nub)
isSorted :: (Ord a) => [a] -> Bool
isSorted [] = True
isSorted [_] = True
isSorted(x:y:xs) = x <= y && isSorted (y:xs)
max_sequence :: (Ord a) => [a] -> [a]
max_sequence xs = maximumBy (comparing length) $ map nub $ filter isSorted (subsequences xs)
Which seems to work correctly:
*Main> max_sequence [21,27,15,14,18,16,14,17,22,13]
[14,16,17,22]
Note: map nub is used to remove duplicate elements from the subsequences. Without it, this returns [14,14,17,22] as the maximum subsequence, which may be fine if you allow duplicates.
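As a side note, if strictly increasing subsequences are what you want, an alternative to the map nub step (a variation of my own, with the same imports) is to demand strict increase in the predicate itself:

isSortedStrict :: (Ord a) => [a] -> Bool
isSortedStrict (x:y:xs) = x < y && isSortedStrict (y:xs)
isSortedStrict _ = True

max_sequence' :: (Ord a) => [a] -> [a]
max_sequence' xs = maximumBy (comparing length) $ filter isSortedStrict (subsequences xs)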
A more efficient O(n log n) solution can be done by maintaining a map where
keys are the first element of an increasing sequence,
values are a tuple: (length of the sequence, the actual sequence),
and the map maintains the invariant that for each possible size of an increasing sequence, only the lexicographically largest one is retained.
There is an extra traceShow below to demonstrate how the map changes while folding from the end of the list:
import Debug.Trace (traceShow)
import Data.Map (empty, elems, insert, delete, lookupGT, lookupLT)

-- longest (strictly) increasing sequence
lis :: (Ord k, Show k, Foldable t) => t k -> [k]
lis = snd . maximum . elems . foldr go empty
  where
    go x m = traceShow m $ case x `lookupLT` m of
        Nothing     -> m'
        Just (k, v) -> if fst a < fst v then m' else k `delete` m'
      where
        a = case x `lookupGT` m of
              Nothing          -> (1, [x])
              Just (_, (i, r)) -> (i + 1, x:r)
        m' = insert x a m
then:
\> lis [21,27,15,14,18,16,14,17,22,13]
fromList []
fromList [(13,(1,[13]))]
fromList [(22,(1,[22]))]
fromList [(17,(2,[17,22])),(22,(1,[22]))]
fromList [(14,(3,[14,17,22])),(17,(2,[17,22])),(22,(1,[22]))]
fromList [(16,(3,[16,17,22])),(17,(2,[17,22])),(22,(1,[22]))]
fromList [(16,(3,[16,17,22])),(18,(2,[18,22])),(22,(1,[22]))]
fromList [(14,(4,[14,16,17,22])),(16,(3,[16,17,22])),(18,(2,[18,22])),(22,(1,[22]))]
fromList [(15,(4,[15,16,17,22])),(16,(3,[16,17,22])),(18,(2,[18,22])),(22,(1,[22]))]
fromList [(15,(4,[15,16,17,22])),(16,(3,[16,17,22])),(18,(2,[18,22])),(27,(1,[27]))]
[15,16,17,22]
It is not necessary to retain the lists within the map. One can reconstruct the longest increasing sequence only using the keys and the length of the sequences (i.e. only the first element of the tuples).
Excellent question! Looking forward to a variety of answers.
Still improving my answer. The answer below folds to build increasing subsequences from the right. It also uses the list monad to prepend new elements to subsequences if the new element is smaller than the head of the subsequence. (This is my first real application of the list monad.) For example,
λ> [[3], [1]] >>= (prepIfSmaller 2)
[[2,3],[3],[1]]
This solution is about as short as I can make it.
import Data.List (maximumBy)

maxSubsequence :: Ord a => [a] -> [a]
maxSubsequence [] = []
maxSubsequence xs = takeLongest $ go [] xs
  where
    takeLongest :: Ord a => [[a]] -> [a]
    takeLongest = maximumBy (\x y -> compare (length x) (length y))

    go :: Ord a => [[a]] -> [a] -> [[a]]
    go = foldr (\x subs -> [x] : (subs >>= prepIfSmaller x))
      where prepIfSmaller x s@(h:_) = (if x < h then [x:s] else []) ++ [s]
Quick test.
λ> maxSubsequence [21,27,15,14,18,16,14,17,22,13]
[15,16,17,22]

Haskell: change all indices from a list to some value

If I am given a list of objects and another list for some indices from this list, is there an easy way to change every object in this list with an index from the list of indices to a different value?
E.g. I am hoping there exists some function f such that
f 0 [4,2,5] [6,5,8,4,3,6,2,7]
would output
[6,5,0,4,0,0,2,7]
Here is a beautiful version that uses lens:
import Control.Lens
f :: a -> [Int] -> [a] -> [a]
f x is = elements (`elem` is) .~ x
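For instance, with the example from the question this should give (an illustrative check of my own):

ghci> f 0 [4,2,5] [6,5,8,4,3,6,2,7]
[6,5,0,4,0,0,2,7]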
Here is an efficient version that doesn't have any dependencies other than base. Basically, we start by sorting (and removing duplicates from the) indices list. That way, we don't need to scan the whole list for every replacement.
import Data.List

f :: a -> [Int] -> [a] -> [a]
f x is xs = snd $ mapAccumL go is' (zip xs [0..])   -- 0-based indices, as in the example
  where
    is' = map head . group . sort $ is
    go [] (y, _) = ([], y)
    go (i:is) (y, j) = if i == j then (is, x) else (i:is, y)
You can define a helper function to replace a single value and then use it to fold over your list.
replaceAll :: a -> [Int] -> [a] -> [a]
replaceAll repVal indices values = foldl (replaceValue repVal) values indices
  where replaceValue val vals index = take index vals ++ [val] ++ drop (index + 1) vals
Sort the indices first. Then you can traverse the two lists in tandem.
{-# LANGUAGE ScopedTypeVariables #-}

import Prelude (Eq, Enum, Num, Ord, snd, (==), (<$>))
import Data.List (head, group, sort, zip)

f :: forall a. (Eq a, Enum a, Num a, Ord a) => a -> [a] -> [a] -> [a]
f replacement indices values =
  go (head <$> group (sort indices)) (zip [0..] values)
  where
    go :: [a] -> [(a, a)] -> [a]
    go [] vs = snd <$> vs
    go _ [] = []
    go (i:is) ((i', v):vs) | i == i' = replacement : go is vs
    go is (v:vs) = snd v : go is vs
The sorting incurs an extra log factor on the length of the index list, but the rest is linear.

Nondeterminism for infinite inputs

Using lists to model nondeterminism is problematic if the inputs can take infinitely many values. For example
pairs = [ (a,b) | a <- [0..], b <- [0..] ]
This will return [(0,0),(0,1),(0,2),...] and never get around to showing you any pair whose first element is not 0.
Using the Cantor pairing function to collapse a list of lists into a single list can get around this problem. For example, we can define a bind-like operator that orders its outputs more intelligently by
(>>>=) :: [a] -> (a -> [b]) -> [b]
as >>>= f = cantor (map f as)

cantor :: [[a]] -> [a]
cantor xs = go 1 xs
  where
    go _ [] = []
    go n xs = hs ++ go (n+1) ts
      where
        ys = filter (not . null) xs
        hs = take n $ map head ys
        ts = mapN n tail ys

mapN :: Int -> (a -> a) -> [a] -> [a]
mapN _ _ [] = []
mapN n f xs@(h:t)
  | n <= 0    = xs
  | otherwise = f h : mapN (n-1) f t
If we now wrap this up as a monad, we can enumerate all possible pairs
newtype Select a = Select { runSelect :: [a] }

-- (on GHC >= 7.10 you also need Functor and Applicative instances,
--  e.g. fmap mapping over the list and pure a = Select [a])
instance Monad Select where
  return a = Select [a]
  Select as >>= f = Select $ as >>>= (runSelect . f)

pairs = runSelect $ do
  a <- Select [0..]
  b <- Select [0..]
  return (a,b)
This results in
>> take 15 pairs
[(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0),(0,4),(1,3),(2,2),(3,1),(4,0)]
which is a much more desirable result. However, if we were to ask for triples instead, the ordering on the outputs isn't as "nice" and it's not even clear to me that all outputs are eventually included --
>> take 15 triples
[(0,0,0),(0,0,1),(1,0,0),(0,1,0),(1,0,1),(2,0,0),(0,0,2),(1,1,0),(2,0,1),(3,0,0),(0,1,1),(1,0,2),(2,1,0),(3,0,1),(4,0,0)]
Note that (2,0,1) appears before (0,1,1) in the ordering -- my intuition says that a good solution to this problem will order the outputs according to some notion of "size", which could be an explicit input to the algorithm, or could be given implicitly (as in this example, where the "size" of an input is its position in the input lists). When combining inputs, the "size" of a combination should be some function (probably the sum) of the size of the inputs.
Is there an elegant solution to this problem that I am missing?
TL;DR: It flattens two dimensions at a time, rather than flattening three at once. You can't tidy this up in the monad because >>= is binary, not ternary etc.
I'll assume you defined
(>>>=) :: [a] -> (a -> [b]) -> [b]
as >>>= f = cantor $ map f as
to interleave the list of lists.
You like that because it goes diagonally:
sums = runSelect $ do
  a <- Select [0..]
  b <- Select [0..]
  return (a+b)
gives
ghci> take 36 sums
[0,1,1,2,2,2,3,3,3,3,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7,7]
so it's pleasingly keeping the "sizes" in order, but the pattern appears to be broken for triples, and you doubt completeness, but you needn't. It's doing the same trick, but twice, rather than for all three at once:
triplePairs = runSelect $ do
  a <- Select [0..]
  b <- Select [0..]
  c <- Select [0..]
  return $ (a,(b,c))
The second pair is treated as a single source of data, so notice that:
ghci> map fst $ take 36 pairs
[0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7]
ghci> map fst $ take 36 triplePairs
[0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7]
and (adding some spaces/newlines for clarity of pattern):
ghci> map snd $ take 36 pairs
[0, 1,0, 2,1,0, 3,2,1,0, 4,3,2,1,0, 5,4,3,2,1,0, 6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0]
ghci> map snd $ take 36 triplePairs
[(0,0), (0,1),(0,0), (1,0),(0,1),(0,0), (0,2),(1,0),(0,1),(0,0),
(1,1),(0,2),(1,0),(0,1),(0,0),
(2,0),(1,1),(0,2),(1,0),(0,1),(0,0),
(0,3),(2,0),(1,1),(0,2),(1,0),(0,1),(0,0),
(1,2),(0,3),(2,0),(1,1),(0,2),(1,0),(0,1),(0,0)]
so you can see it's using exactly the same pattern. This doesn't preserve total sums and it oughtn't because we're getting to three dimensions by flattening two dimensions first before flattening the third in. The pattern is obscured, but it's just as guaranteed to make it to the end of the list.
Sadly if you want to do three dimensions in a sum-preserving way, you'll have to write cantor2, cantor3 and cantor4 functions, possibly a cantorN function, but you'll have to ditch the monadic interface, which is inherently based on the bracketing of >>=, hence two-at-a-time flattening of dimensions.
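To illustrate, a sum-preserving cantor3 for the [0..]-style inputs used here could look like the following rough sketch of my own (it relies on (!!), so it is only valid for infinite input lists):

cantor3 :: [a] -> [b] -> [c] -> [(a, b, c)]
cantor3 as bs cs =
  [ (as !! i, bs !! j, cs !! k)
  | s <- [0..], i <- [0..s], j <- [0..s-i], let k = s - i - j ]

-- take 10 (cantor3 [0..] [0..] [0..])
--   == [(0,0,0),(0,0,1),(0,1,0),(1,0,0),(0,0,2),(0,1,1),(0,2,0),(1,0,1),(1,1,0),(2,0,0)]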
import Control.Applicative
import Control.Arrow

data Select a = Select [a]
              | Selects [Select a]

instance Functor Select where
  fmap f (Select x)    = Select $ map f x
  fmap f (Selects xss) = Selects $ map (fmap f) xss

instance Applicative Select where
  pure = Select . (:[])
  Select fs  <*> xs = Selects $ map (`fmap` xs) fs
  Selects fs <*> xs = Selects $ map (<*> xs) fs

instance Monad Select where
  return = pure
  Select xs  >>= f = Selects $ map f xs
  Selects xs >>= f = Selects $ map (>>= f) xs

runSelect :: Select a -> [a]
runSelect = go 1
  where go n xs = uncurry (++) . second (go $ n+1) $ splitOff n xs
        splitOff n (Select xs)   = second Select $ splitAt n xs
        splitOff n (Selects sls) = (concat hs, Selects $ tsl ++ rl)
          where ((hs, tsl), rl) = first (unzip . map (splitOff n)) $ splitAt n sls
*Select> take 15 . runSelect $ do { a <- Select [0..]; b <- Select [0..]; return (a,b) }
[(0,0),(0,1),(1,0),(1,1),(0,2),(1,2),(2,0),(2,1),(2,2),(0,3),(1,3),(2,3),(3,0),(3,1),(3,2)]
*Select> take 15 . runSelect $ do { a <- Select [0..]; b <- Select [0..]; c <- Select [0..]; return (a,b,c) }
[(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1),(0,0,2),(0,1,2),(0,2,0),(0,2,1),(0,2,2),(1,0,2),(1,1,2)]
Note that this is still not quite Cantor-tuples ((0,1,1) shouldn't come before (1,0,0)), but getting it correct would be possible as well in a similar manner.
A correct multidimensional enumerator could be represented with a temporary state object:
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverlappingInstances #-}

class Space a b where
  slice :: a -> ([b], a)

instance Space [a] a where
  slice (l:ls) = ([l], ls)
  slice []     = ([], [])

instance (Space sp x) => Space ([sp], [sp]) x where
  slice (fs, b:bs) =
    let ss    = map slice (b : fs)
        yield = concat $ map fst ss
    in (yield, (map snd ss, bs))
Here an N-dimensional space is represented by a tuple of lists of (N-1)-dimensional subspaces that have and haven't yet been touched by the enumeration.
You can then use the following to produce a well ordered list
enumerate :: (Space sp x) => sp -> [x]
enumerate sp = let (sl, sp') = slice sp
               in sl ++ enumerate sp'
Example in Ideone.
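For instance (an illustrative usage of my own, since the Ideone example is not reproduced here), a two-dimensional space over the naturals can be built as a pair of already-opened and pending rows and then enumerated along anti-diagonals; the explicit result type is needed because the class has no functional dependency:

pairsSpace :: ([[(Int, Int)]], [[(Int, Int)]])
pairsSpace = ([], [ [ (i, j) | j <- [0..] ] | i <- [0..] ])

firstTen :: [(Int, Int)]
firstTen = take 10 (enumerate pairsSpace)
-- [(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(3,0),(2,1),(1,2),(0,3)]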
The omega package does exactly what you want and guarantees that every element will be eventually visited:
import Control.Applicative
import Control.Monad.Omega
main = print . take 200 . runOmega $
         (,,) <$> each [0..] <*> each [0..] <*> each [0..]
Another option would be to use LogicT. It gives more flexibility (if you need) and has operations such as (>>-) that ensure that every combination is eventually encountered.
import Control.Applicative
import Control.Monad
import Control.Monad.Logic
-- | Convert a list into any MonadPlus.
each :: (MonadPlus m) => [a] -> m a
each = msum . map return

-- | A fair variant of '(<*>)' that ensures that both branches are explored.
(<#>) :: (MonadLogic m) => m (a -> b) -> m a -> m b
(<#>) f k = f >>- (\f' -> k >>- (\k' -> return $ f' k'))
infixl 4 <#>

main = print . observeMany 200 $
         (,,) <$> each [0..] <#> each [0..] <#> each [0..]

Function to show the lowest represented element in a list

If you have a list such as this in Haskell:

data TestType = A | B | C deriving (Ord, Eq, Show)

list1 :: [TestType]
list1 = [A,B,C,B,C,A,B,C,C,C]

Is it possible to write a function to determine which element is represented the least in a list (so in this case A)?
My initial thought was to write a helper function such as this but now I am not sure if this is the right approach:
appears :: TestType -> [TestType] -> Int
appears _ [] = 0
appears x (y:ys) | x == y    = 1 + appears x ys
                 | otherwise = appears x ys
I am still fairly new to Haskell, so apologies for the potentially silly question.
Many thanks
A slightly different version of Matt's approach:
import Data.List
import Data.Ord
leastFrequent :: Ord a => [a] -> a
leastFrequent = head . minimumBy (comparing length) . group . sort
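With the list from the question in scope, this should give (illustrative):

ghci> leastFrequent [A,B,C,B,C,A,B,C,C,C]
A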
You can build a map counting how often each item occurs in the list
import qualified Data.Map as Map
frequencies list = Map.fromListWith (+) $ zip list (repeat 1)
Then you can find the least/most represented using minimumBy or maximumBy from Data.List on the list of Map.assocs of the frequency map, or even sort it by frequency using sortBy.
module Frequencies where
import Data.Ord
import Data.List
import qualified Data.Map as Map
frequencyMap :: Ord a => [a] -> Map.Map a Int
frequencyMap list = Map.fromListWith (+) $ zip list (repeat 1)
-- Caution: leastFrequent will cause an error if called on an empty list!
leastFrequent :: Ord a => [a] -> a
leastFrequent = fst . minimumBy (comparing snd) . Map.assocs . frequencyMap
ascendingFrequencies :: Ord a => [a] -> [(a,Int)]
ascendingFrequencies = sortBy (comparing snd) . Map.assocs . frequencyMap
Here's another way to do it:
sort the list
group the list
find the length of each group
return the group with the shortest length
Example:
import GHC.Exts
import Data.List

fewest :: (Ord a) => [a] -> a
fewest xs = fst $ head sortedGroups
  where
    sortedGroups = sortWith snd $ zip (map head groups) (map length groups)
    groups = group $ sort xs
A less elegant idea would be:
first sort and group the list,
then pair each case with its number of occurrences,
and at last sort them by their number of occurrences.
In code this looks like
import Data.List

sortByRepr :: (Ord a) => [a] -> [(a, Int)]
sortByRepr xx = sortBy compareSnd $ map numOfRepres $ group $ sort xx
  where compareSnd x y = compare (snd x) (snd y)
        numOfRepres x = (head x, length x)

The least represented element you then get by applying head to the resulting list (and fst to that pair).
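For instance, with the question's list (an illustrative check of my own):

*Main> sortByRepr [A,B,C,B,C,A,B,C,C,C]
[(A,2),(B,3),(C,5)]

so fst (head it) gives A, the least represented element.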
