Reordering the search space - haskell

I'm trying to find the first element, in a search space ordered by descending sum, that satisfies some predicate.
This strategy was chosen because evaluating the predicate can be quite expensive, and the probability of finding a solution among the first (largest-sum) candidates is quite high.
Here is my solution, which first builds a list of all candidates, then sorts it and performs a linear search.
import Data.Ord
import Data.List
search :: (Ord a, Num a) => ([a] -> Bool) -> [[a]] -> Maybe [a]
search p = find p . sortOn (Down . sum) . sequence
Example
main = print $ search ((<25) . sum) [[10,2], [10,8,6], [8]]
Output
Just [10,6,8]
Question
Is there a way to generate elements of this space in descending order without sorting?

The exact case described
In this exact case there is a clear best element in the space, namely the one built by taking the minimum of each list, and if any element matches the predicate then that best one does:
import Control.Monad (guard)

-- I have, over the years, found many uses for ensure
ensure p x = x <$ guard (p x)

search p = ensure p . map minimum
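A quick check in ghci with the question's input (this version returns the minimum-sum element, which by the argument above matches whenever anything does):
> search ((<25) . sum) [[10,2], [10,8,6], [8]]
Just [2,6,8]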
(<25) . sum is a placeholder, but Down . sum is exact
If your predicate is just an example, but your heuristic is really summing, you can use a priority queue to search through the space. For simplicity, I'll use [(b,a)] as my priority queue with priorities b and values a, maintaining the invariant that the list is sorted by b. Of course you should use a better implementation if you want efficiency.
And now we basically just reimplement sequence to produce its elements in priority order and maintain the sum of the lists it produces as their priority. Introducing the priority queue invariant is a small one-time cost up front.
import Data.List
import Data.Ord
increasingSums :: (Ord a, Num a) => [[a]] -> [[a]]
increasingSums = map snd . go . map sort where
    go [] = [(0, [])]
    go (xs:xss) = let recurse = go xss in mergeOn fst
        [ [ (total+h, h:ts)
          | (total, ts) <- recurse
          ]
        | h <- xs
        ]
The only thing missing is mergeOn, which flattens a collection of priority queues into a single one:
mergeOn :: Ord b => (a -> b) -> [[a]] -> [a]
mergeOn f = go . sortOn (f . head) . filter (not . null) where
    go [] = []
    go ([x]:xss) = x : go xss
    go ((x:xs):xss) = x : go (insertBy (comparing (f . head)) xs xss)
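For intuition, a small run, assuming the definitions above:
> mergeOn id [[1,4,7],[2,5],[3,6]]
[1,2,3,4,5,6,7]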
Testing in ghci, we can see that this expression finishes in a non-stupid amount of time:
> take 10 . increasingSums . replicate 4 $ [1..1000]
[[1,1,1,1],[2,1,1,1],[1,2,1,1],[1,1,2,1],[1,1,1,2],[2,1,1,2],[1,2,1,2],[1,1,2,2],[1,1,1,3],[2,1,2,1]]
Whereas this expression does not:
> take 10 . sortOn sum . sequence . replicate 4 $ [1..1000]
^C^C^C^COMG how do I quit
Meanwhile, it is also competitive for producing the complete list of sums in sorted order (at least when run interpreted in ghci; I didn't test whether the compiled, optimized versions are also about equal):
> :set +s
> sum . map sum . increasingSums . replicate 4 $ [1..30]
50220000
(1.99 secs, 1,066,135,432 bytes)
> sum . map sum . sortOn sum . sequence . replicate 4 $ [1..30]
50220000
(2.60 secs, 2,226,497,344 bytes)
Down . sum is a placeholder
Finally, if your heuristic is just an example, and you want a fully general solution that will work for all heuristics, you're out of luck. Doing a structured walk through your search space requires knowing something special about that structure to exploit. (For example, above we know that if x<y then total+x<total+y, and we exploit this to cheaply maintain our priority queue.)

Related

Benchmarking Filter and Partition

I was testing the performance of the partition function for lists and got some strange results, I think.
We have that partition p xs == (filter p xs, filter (not . p) xs), but we chose the first implementation because it only performs a single traversal over the list. Yet the results I got suggest that it may be better to use the implementation that does two traversals.
Here is the minimal code that shows what I'm seeing:
import Criterion.Main
import System.Random
import Data.List (partition)

mypartition :: (a -> Bool) -> [a] -> ([a],[a])
mypartition p l = (filter p l, filter (not . p) l)

randList :: RandomGen g => g -> Integer -> [Integer]
randList gen 0 = []
randList gen n = x : xs
  where
    (x, gen') = random gen
    xs = randList gen' (n - 1)

main = do
    gen <- getStdGen
    let arg10000000 = randList gen 10000000
    defaultMain
      [ bgroup "filters -- split list in half "
          [ bench "partition100"   $ nf (partition (>= 50)) arg10000000
          , bench "mypartition100" $ nf (mypartition (>= 50)) arg10000000
          ]
      ]
I ran the tests both with -O and without it, and both times I get that the double traversal is better.
I am using ghc-7.10.3 with criterion-1.1.1.0
My questions are:
Is this expected?
Am I using Criterion correctly? I know that laziness can be tricky and (filter p xs, filter (not . p) xs) will only do two traversals if both elements of the tuple are used.
Does this have something to do with the way lists are handled in Haskell?
Thanks a lot!
There is no black-or-white answer to the question. To dissect the problem, consider the following code:
import Control.DeepSeq
import Data.List (partition)
import System.Environment (getArgs)

mypartition :: (a -> Bool) -> [a] -> ([a],[a])
mypartition p l = (filter p l, filter (not . p) l)

main :: IO ()
main = do
    let cnt = 10000000
        xs  = take cnt $ concat $ repeat [1 .. 100 :: Int]
    args <- getArgs
    putStrLn $ unwords $ "Args:" : args
    case args of
      [percent, fun]
        -> let p = (read percent >=)
           in case fun of
                "partition"      -> print $ rnf $ partition p xs
                "mypartition"    -> print $ rnf $ mypartition p xs
                "partition-ds"   -> deepseq xs $ print $ rnf $ partition p xs
                "mypartition-ds" -> deepseq xs $ print $ rnf $ mypartition p xs
                _ -> err
      _ -> err
  where
    err = putStrLn "Sorry, I do not understand."
I do not use Criterion, to have better control over the order of evaluation. To get timings, I use the +RTS -s runtime option. The different test cases are executed using different command line options. The first command line option defines for which percentage of the data the predicate holds. The second command line option chooses between the different tests.
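For example, assuming the program is compiled to a binary named partition-bench (the name is mine), one test case would be run as:

$ ./partition-bench 50 partition-ds +RTS -s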
The tests distinguish two cases:
The data is generated lazily (2nd argument partition or mypartition).
The data is already fully evaluated in memory (2nd argument partition-ds or mypartition-ds).
The result of the partitioning is always evaluated from left to right, i.e. starting with the list that contains all the elements for which the predicate holds.
In case 1, partition has the advantage that elements of the first resulting list can be consumed and discarded before all elements of the input list have even been produced. Case 1 is especially good if the predicate matches many elements, i.e. the first command line argument is large.
In case 2, partition cannot play out this advantage, since all elements are already in memory.
For mypartition, in any case all elements are held in memory after the first resulting list is evaluated, because they are needed again to compute the second resulting list. Therefore there is not much of a difference between the two cases.
It seems that the more memory is used, the harder garbage collection gets. Therefore partition is well suited when the predicate matches many elements and the lazy variant is used.
Conversely, if the predicate does not match many elements or all elements are already in memory, mypartition performs better, since its recursion does not deal with pairs, in contrast to partition.
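For reference, here is essentially how partition is implemented in Data.List (paraphrased from base); the recursion threads a pair through foldr, and the irrefutable pattern ~(ts,fs) is exactly what the question linked below discusses:

partition :: (a -> Bool) -> [a] -> ([a],[a])
partition p xs = foldr (select p) ([],[]) xs

select :: (a -> Bool) -> a -> ([a],[a]) -> ([a],[a])
select p x ~(ts,fs)
  | p x       = (x:ts, fs)
  | otherwise = (ts, x:fs)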
The Stackoverflow question “Irrefutable pattern does not leak memory in recursion, but why?” might give some more insights about the handling of pairs in the recursion of partition.

Directly generating specific subsets of a powerset?

Haskell's expressiveness enables us to rather easily define a powerset function:
import Control.Monad (filterM)
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])
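For example, in ghci:

>>> powerset [1,2]
[[1,2],[1],[2],[]]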
To be able to perform my task it is crucial for said powerset to be sorted by a specific function, so my implementation kind of looks like this:
import Data.List (sortBy)
import Data.Ord (comparing)
powersetBy :: Ord b => ([a] -> b) -> [a] -> [[a]]
powersetBy f = sortBy (comparing f) . powerset
Now my question is whether there is a way to generate only a subset of the powerset given a specific start and end point, where f(start) < f(end) and |start| < |end|. For example, my parameter is a list of integers ([1,2,3,4,5]) and the subsets are sorted by their sum. Now I want to extract only the subsets in a given range, let's say 3 to 7. One way to achieve this would be to filter the powerset down to my range, but this seems (and is) inefficient when dealing with larger sets:
badFunction :: Ord b => b -> b -> ([a] -> b) -> [a] -> [[a]]
badFunction start end f = filter (\x -> f x >= start && f x <= end) . powersetBy f
badFunction 3 7 sum [1,2,3,4,5] produces [[1,2],[3],[1,3],[4],[1,4],[2,3],[5],[1,2,3],[1,5],[2,4],[1,2,4],[2,5],[3,4]].
Now my question is whether there is a way to generate this list directly, without having to generate all 2^n subsets first, since it will improve performance drastically by not having to check all elements but rather generating them "on the fly".
If you want to allow for completely general ordering functions, then there can't be a way around checking all elements of the powerset. (After all, how would you know there isn't a special clause built in that gives, say, the particular set [6,8,34,42] a completely different ranking from its neighbours?)
However, you could already make the algorithm drastically faster by:
Only sorting after filtering: sorting is O(n · log n), so you want to keep n low here; for the O(n) filtering step it matters less. (And anyway, the number of elements doesn't change through sorting.)
Applying the ordering function only once to each subset.
So
import Control.Arrow ((&&&))

lessBadFunction :: Ord b => (b,b) -> ([a]->b) -> [a] -> [[a]]
lessBadFunction (start,end) f
      = map snd . sortBy (comparing fst)
      . filter (\(k,_) -> k>=start && k<=end)
      . map (f &&& id)
      . powerset
Basically, let's face it: powersets of anything but a very small base list are infeasible. The particular application “sum in a certain range” is pretty much a packing problem; there are quite efficient ways to do that kind of thing, but you'll have to give up the idea of perfect generality and of quantification over arbitrary subsets.
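As an illustration of what such a specialised approach might look like, here is a minimal sketch (names are mine) for the sum case, assuming non-negative elements: branches whose running sum would exceed the upper bound are pruned rather than generated and filtered away.

boundedSubsets :: Int -> [Int] -> [[Int]]
boundedSubsets _  []     = [[]]
boundedSubsets hi (x:xs)
  | x > hi    = boundedSubsets hi xs   -- any subset containing x would overshoot
  | otherwise = let without = boundedSubsets hi xs
                in  without ++ map (x:) (boundedSubsets (hi - x) xs)

rangeSubsets :: Int -> Int -> [Int] -> [[Int]]
rangeSubsets lo hi = filter ((>= lo) . sum) . boundedSubsets hi

rangeSubsets 3 7 [1,2,3,4,5] yields the same subsets as badFunction 3 7 sum [1,2,3,4,5], up to ordering, without ever expanding branches that are already over the bound.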
Since your problem is essentially a constraint satisfaction problem, using an external SMT solver might be the better alternative here; assuming you can afford the extra IO in the type and the need for such a solver to be installed. The SBV library allows construction of such problems. Here's one encoding:
import Data.SBV

-- c is the cost type
-- e is the element type
pick :: (Num e, SymWord e, SymWord c) => c -> c -> ([SBV e] -> SBV c) -> [e] -> IO [[e]]
pick begin end cost xs = do
    solutions <- allSat constraints
    return $ map extract $ extractModels solutions
  where
    extract ts = [x | (t, x) <- zip ts xs, t]
    constraints = do
        tags <- mapM (const free_) xs
        let tagged    = zip tags xs
            finalCost = cost [ite t (literal x) 0 | (t, x) <- tagged]
        solve [finalCost .>= literal begin, finalCost .<= literal end]

test :: IO [[Integer]]
test = pick 3 7 sum [1,2,3,4,5]
We get:
Main> test
[[1,2],[1,3],[1,2,3],[1,4],[1,2,4],[1,5],[2,5],[2,3],[2,4],[3,4],[3],[4],[5]]
For large lists, this technique will beat generating all subsets and filtering, assuming the cost function generates reasonable constraints. (Addition will typically be OK; if you have multiplications, the backend solver will have a harder time.)
(As a side note, you should never use filterM (const [True, False]) to generate power-sets to start with! While that expression is cute and fun, it is extremely inefficient!)

Long running time of a program that counts Ints

I want to write a program that takes a list of Ints and a length and returns a list that contains at position i all elements that equal i. For example:
[0,0,0,1,3,5,3,2,2,4,4,4] 6 -> [[0,0,0],[1],[2,2],[3,3],[4,4,4],[5]]
[0,0,4] 7 -> [[0,0],[],[],[],[4],[],[]]
[] 3 -> [[],[],[]]
[2,2] 3 -> [[],[],[2,2]]
So, here's my solution:
import Data.List
import Data.Function

f :: [Int] -> Int -> [[Int]]
f ls len = g 0 ls' [] where
    ls' = group . sort $ ls
    g :: Int -> [[Int]] -> [[Int]] -> [[Int]]
    g val [] accum
        | len == val = accum
        | otherwise  = g (val+1) [] (accum ++ [[]])
    g val (x:xs) accum
        | len == val = accum
        | val == head x = g (val+1) xs (accum ++ [x])
        | otherwise     = g (val+1) (x:xs) (accum ++ [[]])
But the query f [] 1000000 takes really long. Why?
I see we're accumulating over some data structure. I think foldMap. I ask "Which Monoid"? It's some kind of lists of accumulations. Like this
newtype Bunch x = Bunch {bunch :: [x]}

instance Semigroup x => Semigroup (Bunch x) where
    Bunch xss <> Bunch yss = Bunch (glom xss yss) where
        glom [] yss = yss
        glom xss [] = xss
        glom (xs : xss) (ys : yss) = (xs <> ys) : glom xss yss

instance Semigroup x => Monoid (Bunch x) where
    mempty = Bunch []
Our underlying elements have some associative operator <>, and we can thus apply that operator pointwise to a pair of lists, just like zipWith does, except that when we run out of one of the lists, we don't truncate, rather we just take the other. Note that Bunch is a name I'm introducing for purposes of this answer, but it's not that unusual a thing to want. I'm sure I've used it before and will again.
If we can translate
0 -> Bunch [[0]] -- single 0 in place 0
1 -> Bunch [[],[1]] -- single 1 in place 1
2 -> Bunch [[],[],[2]] -- single 2 in place 2
3 -> Bunch [[],[],[],[3]] -- single 3 in place 3
...
and foldMap across the input, then we'll get the right number of each in each place. There should be no need for an upper bound on the numbers in the input to get a sensible output, as long as you are willing to interpret [] as "the rest is silence". Otherwise, like Procrustes, you can pad or chop to the length you need.
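Concretely, the translation and the fold can be written out directly (this rendering is mine; the names translate and groupInts0 are made up), before introducing the Control.Newtype conveniences below:

translate :: Int -> Bunch [Int]
translate i = Bunch (replicate i [] ++ [[i]])  -- i empty slots, then a slot holding i

groupInts0 :: [Int] -> [[Int]]
groupInts0 = bunch . foldMap translate

For instance, groupInts0 [0,0,4] is [[0,0],[],[],[],[4]]; pad with []s up to the requested length if you need exactly len slots.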
Note, by the way, that when mappend's first argument comes from our translation, we do a bunch of ([]++) operations, a.k.a. ids, then a single ([i]++), a.k.a. (i:), so if foldMap is right-nested (which it is for lists), then we will always be doing cheap operations at the left end of our lists.
Now, as the question works with lists, we might want to introduce the Bunch structure only when it's useful. That's what Control.Newtype is for. We just need to tell it about Bunch.
instance Newtype (Bunch x) [x] where
    pack   = Bunch
    unpack = bunch
And then it's
groupInts :: [Int] -> [[Int]]
groupInts = ala' Bunch foldMap (basis !!) where
    basis = ala' Bunch foldMap id [iterate ([]:) [], [[[i]] | i <- [0..]]]
What? Well, without going to town on what ala' is in general, its impact here is as follows:
ala' Bunch foldMap f = bunch . foldMap (Bunch . f)
meaning that, although f is a function to lists, we accumulate as if f were a function to Bunches: the role of ala' is to insert the correct pack and unpack operations to make that just happen.
We need (basis !!) :: Int -> [[Int]] to be our translation. Hence basis :: [[[Int]]] is the list of images of our translation, computed on demand at most once each (i.e., the translation, memoized).
For this basis, observe that we need these two infinite lists
[ []            [ [[0]]
, [[]]          , [[1]]
, [[],[]]       , [[2]]
, [[],[],[]]    , [[3]]
...             ...
combined Bunchwise. As both lists have the same length (infinity), I could also have written
basis = zipWith (++) (iterate ([]:) []) [[[i]] | i <- [0..]]
but I thought it was worth observing that this also is an example of Bunch structure.
Of course, it's very nice when something like accumArray hands you exactly the sort of accumulation you need, neatly packaging a bunch of grungy behind-the-scenes mutation. But the general recipe for an accumulation is to think "What's the Monoid?" and "What do I do with each element?". That's what foldMap asks you.
The (++) operator copies its left-hand list. For this reason, adding to the beginning of a list is quite fast, but adding to the end of a list is very slow; done repeatedly, as with accum ++ [x] above, the total cost becomes quadratic.
In summary, avoid adding things to the end of a list. Try to always add to the beginning instead. One simple way to do that is to build the list backwards, and then reverse it at the end. A more devious trick is to use "difference lists" (Google it). Another possibility is to use Data.Sequence rather than a list.
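For instance, here is a sketch of mine applying the build-backwards-and-reverse trick to the original g:

import Data.List (group, sort)

f' :: [Int] -> Int -> [[Int]]
f' ls len = go 0 (group (sort ls)) []
  where
    go val _ accum | val == len = reverse accum    -- done: undo the reversal
    go val [] accum = go (val+1) [] ([] : accum)   -- no groups left
    go val xss@(x:xs) accum
        | val == head x = go (val+1) xs  (x  : accum)
        | otherwise     = go (val+1) xss ([] : accum)

Each step conses onto the front of the accumulator, so f' [] 1000000 now runs in linear time.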
The first thing that should be noted is that the most obvious way to implement this is to use a data structure that allows random access; an array is an obvious choice. Note that you need to add elements to the array multiple times and somehow "join" them.
accumArray is perfect for this.
So we get:

import Data.Array (accumArray, elems)

f l i = elems $ accumArray (\l e -> e:l) [] (0, i-1) (map (\e -> (e,e)) l)

And we're good to go.
This approach does involve converting the final array back into a list, but that step is very likely faster than, say, sorting the list, which often involves scanning the list at least a few times for a list of decent size.
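A quick sanity check against the question's second example, assuming the definition above:

> f [0,0,4] 7
[[0,0],[],[],[],[4],[],[]]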
Whenever you use ++, the entire left-hand list has to be copied, since lists are immutable.
A simple solution would be to use :, but that builds the list in reverse. That can be fixed by applying reverse at the end, which results in building only two lists (instead of a million in your case).
Your concept of glomming things onto an accumulator is a very useful one, and both MathematicalOrchid and Guvante show how you can use that concept reasonably efficiently. But in this case, there is a simpler approach that is likely also faster. You started with
group . sort $ ls
and this was a very good place to start! You get a list that's almost the one you want, except that you need to fill in some blanks. How can we figure those out? The simplest way, though probably not quite the most efficient, is to work with a list of all the numbers you want to count up to: [0 .. len-1].
So we start with
f ls len = g [0 .. len-1] (group . sort $ ls)
where
?
How do we define g? By pattern matching!
f ls len = g [0 .. len-1] (group . sort $ ls)
  where
    -- We may or may not have some lists left,
    -- but we counted as high as we decided we
    -- would
    g [] _ = []
    -- We have no lists left, so the rest of the
    -- numbers are not represented
    g ns [] = map (const []) ns
    -- This shouldn't be possible, because group
    -- doesn't make empty lists.
    g _ ([]:_) = error "group isn't working!"
    -- Finally, we have some work to do!
    g (n:ns) xls@(xl@(x:_):xls')
        | n == x    = xl : g ns xls'
        | otherwise = [] : g ns xls
That was nice, but making the list of numbers isn't free, so you might be wondering how you can optimize it. One method I invite you to try is using your original technique of keeping a separate counter, but following this same sort of structure.

Print elements of list that are repeated in Haskell

I want to print the elements that appear more than once in a list. Can you please tell me how I can do that? I am new to Haskell.
For example, if I have [1,2,3,3,2,4,5,6,5], I want to get only [2,3,5], because those are the repeated elements in the list.
Another solution: First sort the list, then group equal elements and take only the ones that appear multiple times:
>>> :m + Data.Maybe Data.List
>>> let xs = [1..100000] ++ [8,18..100] ++ [10,132,235]
>>> let safeSnd = listToMaybe . drop 1
>>> mapMaybe safeSnd $ group $ sort xs
[8,10,18,28,38,48,58,68,78,88,98,132,235]
group $ sort xs is a list of lists where each list contains all equal elements.
mapMaybe safeSnd returns only those lists that have a 2nd element (i.e. the original element occurred more than once in the original list).
This method should be faster than the one using nub, especially for large lists.
Data.Map.Lazy and Data.Map.Strict are host to a bunch of interesting functions for constructing maps (association maps, dictionaries, whatever you want to call them). One of them is fromListWith
fromListWith :: Ord k => (a -> a -> a) -> [(k, a)] -> Map k a
What you want to build is a map that tells you, for each value in your input list, how often it occurs. The values would be the keys of the map (type k), their counts would be the values associated with the keys (type a). You could use the following expression for that:
fromListWith (+) . map (\x -> (x, 1))
First, all values in the list are put into a tuple, together with a count of one. Then, fromListWith builds a map from the list; if a key already exists, it computes a new count using (+).
Once you've done this, you're only interested in the elements that occur more than once. For this, you can use filter (> 1) from Data.Map.
Finally, you just want to know all keys that remain in the map. Use the function keys for this.
In the end, you get the following module:
import qualified Data.Map.Strict as M

findDuplicates :: (Ord a) => [a] -> [a]
findDuplicates
    = M.keys
    . M.filter (> 1)
    . M.fromListWith (+)
    . map (\x -> (x, 1 :: Integer))
It's common practice to import certain packages like Data.Map qualified, to avoid name conflicts between modules (e.g. filter from Data.Map and the one from Prelude are very different). In this situation, it's best to choose Data.Map.Strict; see the explanation at the top of Data.Map.
The complexity of this method should be O(n log n).
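A quick check with the list from the question (M.keys returns the keys in ascending order):

>>> findDuplicates [1,2,3,3,2,4,5,6,5]
[2,3,5]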
I thought it could be optimized by using a boolean flag to indicate that the value is a duplicate. However, this turned out to be about 20% slower.
You're basically looking for the list of elements that are not unique, or in other words, the difference between the original list and the list of unique elements. In code:
xs \\ (nub xs)
If you don't want to have duplicates in the result list, you'll want to call nub again:
nub $ xs \\ (nub xs)
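For example, with the list from the question (note that (\\) is from Data.List, this approach is quadratic, and the result order follows the duplicates' later occurrences):

>>> import Data.List
>>> let xs = [1,2,3,3,2,4,5,6,5]
>>> nub $ xs \\ nub xs
[3,2,5]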

Merging an unbounded number of ordered infinite sequences

I want to generate all natural numbers together with their decomposition in prime factors, up to a certain threshold.
I came up with the following function:
vGenerate :: [a]                -- generator set for monoid B* (Kleene star of B)
          -> (a, (a -> a -> a)) -- (identity element, generating function)
          -> (a -> Bool)        -- filter
          -> [a]                -- B* filtered
vGenerate [] (g0,_) _ = [g0]
vGenerate (e:es) (g0,g) c =
    let coEs = vGenerate es (g0,g) c
        coE  = takeWhile c $ iterate (g e) g0
    in  concatMap (\m -> takeWhile c $ map (g m) coE) coEs
gen then generates all natural numbers together with their prime factors:
gen threshold =
    let b         = map (\x -> (x,[x])) $ takeWhile (<= threshold) primes
        condition = (<= threshold) . fst
        g0        = (1,[])
        g         = \(n,nl) (m,ml) -> (n*m, nl ++ ml)
    in  vGenerate b (g0,g) condition

primes = [2,3,5,7,11, ...]  -- pseudo code
I have the following questions:
It is not always known in advance how many numbers we will need. Can we modify vGenerate such that it starts with a lazy infinite list of primes, and generates all the factorizations in increasing order? The challenge is that we have an infinite list of primes, for each prime an infinite list of powers of that prime number, and then have to take all possible combinations. The lists are naturally ordered by increasing first element, so they could be generated lazily.
I documented vGenerate in terms of a monoid, with the intention of keeping it as abstract as possible, but perhaps this just obfuscates the code? I want to generalize it later (more as an exercise than for real usage), e.g. for generating raster points within certain constraints, which can also be put in the monoid context, so I thought it was a good start to get rid of all references to the problem space (in this case: primes). But I feel that the filtering function does not fit well into the abstraction: the generation must happen in an order that is monotone in the metric tested by c, because the recursion is terminated as soon as c is not satisfied. Any advice?
Have a look at mergeAll :: Ord a => [[a]] -> [a] from the data-ordlist package. It merges an unbounded number of infinite sequences, as long as the sequences are ordered and the heads of the sequences are ordered. I've used it for similar problems before, for example to generate all numbers of the form 2^i*3^j.
> let numbers = mergeAll [[2^i*3^j | j <- [0..]] | i <- [0..]]
> take 20 numbers
[1,2,3,4,6,8,9,12,16,18,24,27,32,36,48,54,64,72,81,96]
You should be able to extend this to generate all numbers with their factorizations.
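One way that extension might look (a sketch of mine, using mergeAllBy from the same package): carry the factorization alongside each number and merge on the numeric component.

import Data.List.Ordered (mergeAllBy)
import Data.Ord (comparing)

-- All numbers of the form 2^i * 3^j, paired with their prime
-- factorizations, in increasing numeric order.
withFactors :: [(Integer, [Integer])]
withFactors = mergeAllBy (comparing fst)
    [ [ (2^i * 3^j, replicate i 2 ++ replicate j 3) | j <- [0..] ]
    | i <- [0..]
    ]

Each inner list is increasing in its first component, and the heads (the powers 2^i) increase across lists, so the merge precondition is preserved.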
