Directly generating specific subsets of a powerset? - haskell

Haskell's expressiveness enables us to rather easily define a powerset function:
import Control.Monad (filterM)
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])
To be able to perform my task it is crucial for said powerset to be sorted by a specific function, so my implementation kind of looks like this:
import Data.List (sortBy)
import Data.Ord (comparing)
powersetBy :: Ord b => ([a] -> b) -> [a] -> [[a]]
powersetBy f = sortBy (comparing f) . powerset
Now my question is whether there is a way to only generate a subset of the powerset given a specific start and end point, where f(start) < f(end) and |start| < |end|. For example, my parameter is a list of integers ([1,2,3,4,5]) and they are sorted by their sum. Now I want to extract only the subsets in a given range, let's say 3 to 7. One way to achieve this would be to filter the powerset to only include my range, but this seems (and is) inefficient when dealing with larger subsets:
badFunction :: Ord b => b -> b -> ([a] -> b) -> [a] -> [[a]]
badFunction start end f = filter (\x -> f x >= start && f x <= end) . powersetBy f
badFunction 3 7 sum [1,2,3,4,5] produces [[1,2],[3],[1,3],[4],[1,4],[2,3],[5],[1,2,3],[1,5],[2,4],[1,2,4],[2,5],[3,4]].
Now my question is whether there is a way to generate this list directly, without having to generate all 2^n subsets first, since that would improve performance drastically: instead of checking all elements, only the wanted ones would be generated "on the fly".

If you want to allow for completely general ordering-functions, then there can't be a way around checking all elements of the powerset. (After all, how would you know there isn't a special clause built in that gives, say, the particular set [6,8,34,42] a completely different ranking from its neighbours?)
However, you could already make the algorithm drastically faster by
Only sorting after filtering: sorting is O(n · log n), so you want to keep n low here; for the O(n) filtering step it matters less. (And anyway, the number of elements doesn't change through sorting.)
Applying the ordering-function only once to each subset.
So
import Control.Arrow ((&&&))
lessBadFunction :: Ord b => (b,b) -> ([a]->b) -> [a] -> [[a]]
lessBadFunction (start,end) f
      = map snd . sortBy (comparing fst)
      . filter (\(k,_) -> k>=start && k<=end)
      . map (f &&& id)
      . powerset
Basically, let's face it, powersets of anything but a very small basis are infeasible. The particular application “sum in a certain range” is pretty much a packaging problem; there are quite efficient ways to do that kind of thing, but you'll have to give up the idea of perfect generality and of quantification over general subsets.
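For that particular case, here is a minimal sketch (not part of this answer, and assuming non-negative elements) of generating only the wanted subsets directly: the branch that takes an element is cut as soon as the running sum would exceed the upper bound, so the full powerset is never materialised.
subsetsInRange :: (Num a, Ord a) => a -> a -> [a] -> [[a]]
subsetsInRange lo hi = go 0
  where
    -- go acc xs: subsets of xs whose sum, added to acc, lands in [lo, hi]
    go acc [] = [[] | acc >= lo]
    go acc (x:xs)
      | acc + x > hi = go acc xs                               -- taking x can only overshoot
      | otherwise    = map (x:) (go (acc + x) xs) ++ go acc xs
The output is not ordered by the sum; sorting the (much smaller) result afterwards keeps the overall cost low.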

Since your problem is essentially a constraint satisfaction problem, using an external SMT solver might be the better alternative here; assuming you can afford the extra IO in the type and the need for such a solver to be installed. The SBV library allows construction of such problems. Here's one encoding:
import Data.SBV
-- c is the cost type
-- e is the element type
pick :: (Num e, SymWord e, SymWord c) => c -> c -> ([SBV e] -> SBV c) -> [e] -> IO [[e]]
pick begin end cost xs = do
    solutions <- allSat constraints
    return $ map extract $ extractModels solutions
  where
    extract ts = [x | (t, x) <- zip ts xs, t]
    constraints = do
      tags <- mapM (const free_) xs
      let tagged = zip tags xs
          finalCost = cost [ite t (literal x) 0 | (t, x) <- tagged]
      solve [finalCost .>= literal begin, finalCost .<= literal end]
test :: IO [[Integer]]
test = pick 3 7 sum [1,2,3,4,5]
We get:
Main> test
[[1,2],[1,3],[1,2,3],[1,4],[1,2,4],[1,5],[2,5],[2,3],[2,4],[3,4],[3],[4],[5]]
For large lists, this technique will beat out generating all subsets and filtering, assuming the cost function generates reasonable constraints. (Addition will typically be OK; if you have multiplications, the backend solver will have a harder time.)
(As a side note, you should never use filterM (const [True, False]) to generate power-sets to start with! While that expression is cute and fun, it is extremely inefficient!)
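For reference (not part of this answer), Data.List's subsequences enumerates the same 2^n subsets, in a different order, by plain list recursion:
import Data.List (subsequences)
powerset :: [a] -> [[a]]
powerset = subsequences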

Related

Missing Haskell primitive to apply a function to each element of a list successively?

In Haskell, it is well known that the map primitive can be used to apply a given function to all elements of a list:
λ> map toUpper "abcd"
"ABCD"
λ>
While trying to generate all partitions of a finite set (list), the following, similar primitive would be handy:
λ> sap toUpper "abcd"
["Abcd","aBcd","abCd","abcD"]
λ>
with sap standing for successive applications.
The type signature would be:
sap :: (a -> a) -> [a] -> [[a]]
For example, part of the partitions of set "abcd" can be obtained from the partitions of "bcd" by sap'ing them with ('a':).
λ> pbcd = [["b","c","d"],["b","cd"],["bc","d"],["c","bd"],["bcd"]]
λ>
λ> concatMap (sap ('a':)) pbcd
[["ab","c","d"],["b","ac","d"],["b","c","ad"],["ab","cd"],["b","acd"],["abc","d"],["bc","ad"],["ac","bd"],["c","abd"],["abcd"]]
λ>
and the 5 missing partitions can then be obtained by adding 'a' as its own separate singleton.
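Putting the two steps together, the whole generator would look roughly like this (my own sketch, not from the question, assuming a sap as described):
partitions :: [a] -> [[[a]]]
partitions [] = [[]]
partitions (x:xs) =
  -- for each partition of the rest: add x as its own singleton block,
  -- or prepend x to each existing block via sap
  concatMap (\p -> ([x] : p) : sap (x:) p) (partitions xs)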
My problem is that I have been unable to locate such a primitive in the language libraries, and that Hoogle, given the type signature, returns nothing of interest.
Does such a primitive as sap exist somewhere in the Haskell language libraries?
Or is there a way to write it that is so short and simple that it does not even deserve to be a separate function, putting it below the so-called Fairbairn threshold?
Footnote:
It is possible to write sap like this:
sap :: (a -> a) -> [a] -> [[a]]
sap fn ls = fst $ foldr op ([], []) ls
  where op x (ll,tl) = ( ((fn x):tl) : map (x:) ll , x:tl )
Essentially you start with [[fn (last ls)]] as a seed and then progress leftwards. But this seems pedestrian, not simple.
It seems like the simplest version of this is direct recursion:
sap :: (a -> a) -> [a] -> [[a]]
sap _ [] = []
sap f (x:xs) = (f x : xs) : map (x:) (sap f xs)
One possible exploration of this is as a paramorphism, which gives access to the recursive result and the unprocessed remainder together.
sap f = para step where
  step Nil = []
  step (Cons x (xs, rest)) = (f x : xs) : map (x:) rest
(Not checked, might have silly errors)
I don't see that as a huge improvement though. I don't see any deep insights in that decomposition of recursion from the problem itself.
For that, well... I've used holesOf for a generalized version of this in the past.
sap :: Traversable t => (a -> a) -> t a -> [t a]
sap f = map (peeks f) . holesOf traverse
Now that definitely says something. It has generalized the type to work on all instances of Traversable. On the other hand, the theoretical chunks involved were so overpowered for the end result that I'm not sure what it actually is that it says. On the third(?) hand, it looks pretty.
Or is there a way to write it that is so short and simple that it does not even deserve to be a separate function, putting it below the so-called Fairbairn threshold?
This. The functionality is rarely needed, and the (a -> a) argument doesn't make for a very generic application.
A short and simple implementation can be achieved with list recursion:
sap :: (a -> a) -> [a] -> [[a]]
sap _ [] = []
sap f (x:xs) = (f x:xs):((x:) <$> sap f xs)
I don't think it exists anywhere, although proving a negative is of course impossible. Another way to write sap, which I would probably prefer over using foldr:
sap f ls = zipWith (alterWith f) [0 .. length ls - 1] (repeat ls)
  where alterWith f i ls = take i ls ++ f (ls !! i) : drop (i+1) ls
alterWith is available as adjust in https://hackage.haskell.org/package/fft-0.1.8.6/docs/Math-FFT-Base.html#v:adjust, but I would very much not bring something so heavyweight in for that function. I often have something like alterWith defined in a project already, though, and if so that allows sap to be elided in favor of the call to zipWith above.
Exploiting Data.List.HT.splitEverywhere:
import Data.List.HT
sap :: (a -> a) -> [a] -> [[a]]
sap f xs = [ pre ++ f x : post | (pre,x,post) <- splitEverywhere xs]
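splitEverywhere comes from the utility-ht package; a quick check against the example from the question:
λ> import Data.Char (toUpper)
λ> sap toUpper "abcd"
["Abcd","aBcd","abCd","abcD"]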

How to use the Select monad to solve n-queens?

I'm trying to understand how the Select monad works. Apparently, it is a cousin of Cont and it can be used for backtracking search.
I have this list-based solution to the n-queens problem:
-- All the ways of extracting an element from a list.
oneOf :: [Int] -> [(Int,[Int])]
oneOf [] = []
oneOf (x:xs) = (x,xs) : map (\(y,ys) -> (y,x:ys)) (oneOf xs)
-- Adding a new queen at col x, is it threatened diagonally by any of the
-- existing queens?
safeDiag :: Int -> [Int] -> Bool
safeDiag x xs = all (\(y,i) -> abs (x-y) /= i) (zip xs [1..])
nqueens :: Int -> [[Int]]
nqueens queenCount = go [] [1..queenCount]
  where
    -- cps = columns of already positioned queens.
    -- fps = columns that are still available
    go :: [Int] -> [Int] -> [[Int]]
    go cps [] = [cps]
    go cps fps = [ps | (p,nfps) <- oneOf fps, safeDiag p cps, ps <- go (p:cps) nfps]
I'm struggling to adapt this solution to use Select instead.
It seems that Select lets you abstract over the "evaluation function" that is used to compare answers. That function is passed to runSelect. I have the feeling that something like safeDiag in my solution could work as the evaluation function, but how to structure the Select computation itself?
Also, is it enough to use the Select monad alone, or do I need to use the transformer version over lists?
I realize this question is almost 4 years old and already has an answer, but I wanted to chime in with some additional information for the sake of anyone who comes across this question in the future. Specifically, I want to try to answer 2 questions:
how are multiple Selects that return single values combined to create a single Select that returns a sequence of values?
is it possible to return early when a solution path is destined to fail?
Chaining Selects
Select is implemented as a monad transformer in the transformers library (go figure), but let's take a look at how one might implement >>= for Select by itself:
(>>=) :: Select r a -> (a -> Select r b) -> Select r b
Select g >>= f = Select $ \k ->
  let choose x = runSelect (f x) k
  in choose $ g (k . choose)
We start by defining a new Select which takes an input k of type a -> r (recall that Select wraps a function of type (a -> r) -> a). You can think of k as a function that returns a "score" of type r for a given a, which the Select function may use to determine which a to return.
Inside our new Select, we define a function called choose. This function passes some x to the function f, which is the a -> m b portion of monadic binding: it transforms the result of the m a computation into a new computation m b. So f is going to take that x and return a new Select, which choose then runs using our scoring function k. You can think of choose as a function that asks "what would the final result be if I selected x and passed it downstream?"
On the second line, we return choose $ g (k . choose). The function k . choose is the composition of choose and our original scoring function k: it takes in a value, calculates the downstream result of selecting that value, and returns the score of that downstream result. In other words, we've created a kind of "clairvoyant" scoring function: instead of returning the score of a given value, it returns the score of the final result we would get if we selected that value. By passing in our "clairvoyant" scoring function to g (the original Select that we're binding to), we're able to select the intermediate value that leads to the final result we're looking for. Once we have that intermediate value, we simply pass it back into choose and return the result.
That's how we're able to string together single-value Selects while passing in a scoring function that operates on an array of values: each Select is scoring the hypothetical final result of selecting a value, not necessarily the value itself. The applicative instance follows the same strategy, the only difference being how the downstream Select is computed (instead of passing a candidate value into the a -> m b function, it maps a candidate function over the 2nd Select.)
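For concreteness, here is roughly what that looks like in code, assuming the same plain newtype the >>= above is written for (a sketch, so it may differ cosmetically from the transformers instance):
newtype Select r a = Select { runSelect :: (a -> r) -> a }
instance Functor (Select r) where
  fmap f (Select g) = Select $ \k -> f (g (k . f))
instance Applicative (Select r) where
  pure x = Select (const x)
  Select gf <*> Select gx = Select $ \k ->
    let choose f = f (gx (k . f))  -- best argument for a candidate function f, then apply f
    in choose (gf (k . choose))    -- pick the f whose downstream result scores best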
Returning Early
So, how can we use Select while returning early? We need some way of accessing the scoring function within the scope of the code that constructs the Select. One way to do that is to construct each Select within another Select, like so:
sequenceSelect :: Eq a => [a] -> Select Bool [a]
sequenceSelect [] = return []
sequenceSelect domain@(x:xs) = select $ \k ->
  if k [] then runSelect s k else []
  where
    s = do
      choice <- elementSelect (x:|xs)
      fmap (choice:) $ sequenceSelect (filter (/= choice) domain)
This allows us to test the sequence in progress and short-circuit the recursion if it fails. (We can test the sequence by calling k [] because the scoring function includes all of the prepends that we've recursively lined up.)
Here's the whole solution:
import Data.List
import Data.List.NonEmpty (NonEmpty(..))
import Control.Monad.Trans.Select
validBoard :: [Int] -> Bool
validBoard qs = all verify (tails qs)
  where
    verify [] = True
    verify (x:xs) = and $ zipWith (\i y -> x /= y && abs (x - y) /= i) [1..] xs
nqueens :: Int -> [Int]
nqueens boardSize = runSelect (sequenceSelect [1..boardSize]) validBoard
sequenceSelect :: Eq a => [a] -> Select Bool [a]
sequenceSelect [] = return []
sequenceSelect domain@(x:xs) = select $ \k ->
  if k [] then runSelect s k else []
  where
    s = do
      choice <- elementSelect (x:|xs)
      fmap (choice:) $ sequenceSelect (filter (/= choice) domain)
elementSelect :: NonEmpty a -> Select Bool a
elementSelect domain = select $ \p -> epsilon p domain
-- like find, but will always return something
epsilon :: (a -> Bool) -> NonEmpty a -> a
epsilon _ (x:|[]) = x
epsilon p (x:|y:ys) = if p x then x else epsilon p (y:|ys)
In short: we construct a Select recursively, removing elements from the domain as we use them and terminating the recursion if the domain has been exhausted or if we're on the wrong track.
One other addition is the epsilon function (based on Hilbert's epsilon operator). For a domain of size N it will check at most N - 1 items... it might not sound like a huge savings, but as you know from the above explanation, p will usually kick off the remainder of the entire computation, so it's best to keep predicate calls to a minimum.
The nice thing about sequenceSelect is how generic it is: it can be used to create any Select Bool [a] where
we're searching within a finite domain of distinct elements
we want to create a sequence that includes every element exactly once (i.e. a permutation of the domain)
we want to test partial sequences and abandon them if they fail the predicate
Hope this helps clarify things!
P.S. Here's a link to an Observable notebook in which I implemented the Select monad in Javascript along with a demonstration of the n-queens solver: https://observablehq.com/#mattdiamond/the-select-monad
Select can be viewed as an abstraction of a search in a "compact" space, guided by some predicate. You mentioned SAT in your comments; have you tried modelling the problem as a SAT instance and throwing it at a solver based on Select (in the spirit of this paper)? You can specialise the search to hardwire the N-queens-specific constraints into it and turn the SAT solver into an N-queens solver.
Inspired by jd823592's answer, and after looking at the SAT example in the paper, I have written this code:
import Data.List
import Control.Monad.Trans.Select
validBoard :: [Int] -> Bool
validBoard qs = all verify (tails qs)
  where
    verify [] = True
    verify (x : xs) = and $ zipWith (\i y -> x /= y && abs (x-y) /= i) [1..] xs
nqueens :: Int -> [Int]
nqueens boardSize = runSelect (traverse selectColumn columns) validBoard
  where
    columns = replicate boardSize [1..boardSize]
    selectColumn candidates = select $ \s -> head $ filter s candidates ++ candidates
It seems to arrive (albeit slowly) to a valid solution:
ghci> nqueens 8
[1,5,8,6,3,7,2,4]
I don't understand it very well, however. In particular, the way sequence works for Select, transmuting a function (validBoard) that works over a whole board into functions that take a single column index, seems quite magical.
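To make that a bit more concrete, here is sequence unfolded by hand for just two columns, a sketch assuming the plain newtype Select r a = Select { runSelect :: (a -> r) -> a } (the helper names are made up):
sequence2 :: Select r a -> Select r a -> Select r [a]
sequence2 (Select g1) (Select g2) = Select $ \k ->
  let best2 x1 = g2 (\x2 -> k [x1, x2])  -- best second column, given a first column x1
      board x1 = [x1, best2 x1]          -- the whole board that choosing x1 leads to
      first    = g1 (k . board)          -- score each first column by the board it leads to
  in board first
Each single-column select never sees validBoard directly; the predicate it receives first completes the rest of the board as well as it can and only then scores the whole thing.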
The sequence-based solution has the defect that putting a queen in a column doesn't rule out the possibility of choosing the same column for subsequent queens; we end up unnecessarily exploring doomed branches.
If we want our column choices to be affected by previous decisions, we need to go beyond Applicative and use the power of Monad:
nqueens :: Int -> [Int]
nqueens boardSize = fst $ runSelect (go ([],[1..boardSize])) (validBoard . fst)
  where
    go (cps,[]) = return (cps,[])
    go (cps,fps) = (select $ \s ->
      let candidates = map (\(z,zs) -> (z:cps,zs)) (oneOf fps)
      in head $ filter s candidates ++ candidates) >>= go
The monadic version still has the problem that it only checks completed boards, while the original list-based solution backtracked as soon as a partially completed board was found to have a conflict. I don't know how to do that using Select.

Linking in tree structures

While working with long strings, I have come across a rather big problem when creating suffix trees in Haskell.
Some construction algorithms (such as this version of Ukkonen's algorithm) require establishing links between nodes. These links "point" to a node in the tree. In imperative languages such as Java, C#, etc. this is no problem because of reference types.
Are there ways of emulating this behaviour in Haskell? Or is there a completely different alternative?
By tying a recursive knot, you can use a value that isn't determined until the end of a computation while constructing data inside that same computation.
The following computation builds a list of values that each hold the total number of items in the list even though the total is computed by the same function that's building the list. The let binding in zipCount passes one of the results of zipWithAndCount as the first argument to zipWithAndCount.
zipCount :: [a] -> [(a, Int)]
zipCount xs =
  let (count, zipped) = zipWithAndCount count xs
  in zipped
zipWithAndCount :: Num n => b -> [a] -> (n, [(a, b)])
zipWithAndCount y [] = (0, [])
zipWithAndCount y (x:xs) =
  let (count', zipped') = zipWithAndCount y xs
  in (count' + 1, (x, y):zipped')
Running this example produces a list where each item holds the total count of items in the list:
> zipCount ['a'..'e']
[('a',5),('b',5),('c',5),('d',5),('e',5)]
This idea can be applied to Ukkonen's algorithm by passing in the #s that aren't known until the entire result is known.
The general idea of recursively passing a result into a function is called a least fixed point, and is implemented in Data.Function by
fix :: (a -> a) -> a
fix f = let x = f x in x
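As a quick illustration of how fix feeds a result back into the function that produces it (a standard toy example, not from the answer):
> import Data.Function (fix)
> fix (\rec n -> if n == 0 then 1 else n * rec (n - 1)) (5 :: Integer)
120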
We can write zipCount in points-free style in terms of zipWithAndCount and fix.
import Data.Function
zipCount :: [a] -> [(a, Int)]
zipCount = snd . fix . (. fst) . flip zipWithAndCount
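Applied to the original question about links between nodes, the same knot-tying looks roughly like this (a hypothetical toy Tree, not Ukkonen's algorithm): every node stores a link to the root of the final tree, even though that root is only produced by the very call that builds the nodes.
data Tree = Node { label :: Char, link :: Tree, children :: [Tree] }
buildWithLinks :: [Char] -> Tree
buildWithLinks cs =
  let root = Node '*' root (map leaf cs)  -- the knot: root is referenced while being defined
      leaf c = Node c root []             -- every leaf links back to the final root
  in root
Because the fields are lazy, the links can be followed once construction has finished; forcing them during construction would loop.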

Merging an unbound number of ordered infinite sequences

I want to generate all natural numbers together with their decomposition in prime factors, up to a certain threshold.
I came up with the following function:
vGenerate :: [a] -- generator set for monoid B* (Kleene star of B)
          -> (a, (a -> a -> a)) -- (identity element, generating function)
          -> (a -> Bool)        -- filter
          -> [a]                -- B* filtered
vGenerate [] (g0,_) _ = [g0]
vGenerate (e:es) (g0,g) c =
  let coEs = vGenerate es (g0,g) c
      coE = takeWhile c $ iterate (g e) g0
  in concatMap (\m -> takeWhile c $ map (g m) coE) coEs
gen then generates all natural numbers together with their prime factors:
gen threshold =
  let b = map (\x -> (x,[x])) $ takeWhile (<= threshold) primes
      condition = (<= threshold) . fst
      g0 = (1,[])
      g = \(n,nl) (m,ml) -> (n*m, nl ++ ml)
  in vGenerate b (g0,g) condition
primes = [2,3,5,7,11,.. ] -- pseudo code
I have the following questions:
It is not always known in advance how many numbers we will need. Can we modify vGenerate such that it starts with a lazy infinite list of primes, and generates all the factorizations in increasing order? The challenge is that we have an infinite list of primes, for each prime an infinite list of powers of that prime number, and then have to take all possible combinations. The lists are naturally ordered by increasing first element, so they could be generated lazily.
I documented vGenerate in terms of a monoid, with the intention of keeping it as abstract as possible, but perhaps this just obfuscates the code? I want to generalize it later (more as an exercise than for real usage), e.g. for generating raster points within certain constraints, which can also be put in the monoid context, so I thought it was a good start to get rid of all references to the problem space (in this case: primes). But I feel that the filtering function does not fit well into the abstraction: the generation must happen in an order that is monotone for the metric tested by c, because recursion is terminated as soon as c is not satisfied. Any advice?
Have a look at mergeAll :: Ord a => [[a]] -> [a] from the data-ordlist package. It merges an unbounded number of infinite sequences, as long as the sequences are ordered and the heads of the sequences are ordered. I've used it for similar problems before, for example to generate all numbers of the form 2^i*3^j.
> let numbers = mergeAll [[2^i*3^j | j <- [0..]] | i <- [0..]]
> take 20 numbers
[1,2,3,4,6,8,9,12,16,18,24,27,32,36,48,54,64,72,81,96]
You should be able to extend this to generate all numbers with their factorizations.
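For example, here is a rough sketch (my own, not from the answer) that carries the factorisations along, assuming a list primes as in the question. The inner list for a prime p multiplies already-built pairs by p, restricted to pairs whose largest prime factor is at most p, so every number is produced exactly once and all the lists stay ordered:
import Data.List.Ordered (mergeAll)
withFactors :: [(Integer, [Integer])]
withFactors = (1, []) : mergeAll
  [ [ (n * p, p : fs) | (n, fs) <- withFactors, null fs || head fs <= p ]
  | p <- primes ]
The factor lists come out largest-prime-first; reverse them if you want them ascending.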

Functor Design Pattern in Haskell

I apologize for not coming up with a good title for this question. I'm having some trouble expressing what I need. I have a simple problem in Haskell and I am wondering what the best approach is to solve it.
Let's say I have a list of numbers: [-3,2,1,2]. I want to return the value with the highest absolute value. That is, I want to return -3. So I want:
f = maximum . map abs
The problem is, of course, that this returns the calculated value (3) and not the original value (-3).
I could figure out a way of doing this, maybe mapping the original list to a tuple of (originalValue, calculatedValue), finding the tuple whose snd is returned by my function (maximum) and then returning fst of that tuple.
But this seems like a lot of "plumbing" for a simple problem like this, and I wonder if there is some abstraction I'm missing that solves this. That is, there is this general procedure I do all the time, and I want some way of neatly doing it:
I want to take a list of items.
I want to map them to a certain value (let's say the absolute value)
Then I want to select one based on some criteria (let's say I want the maximum or maybe the minimum).
But then I want to return the original value. (If the list was [-3,2,1,2] and I want to return the value with the highest abs, then I would return -3).
Is there a library function for this? Is there a functor or a monad for this?
I think I want a function with the signature:
f :: ([b] -> b) -> (a -> b) -> [a] -> a
i.e.
f maximum abs [-3,2,1,2]
This feels very "functory" to me or maybe "monadic".
Use maximumBy which takes a comparison function. You can then pass some function that compares the way you want.
maximumBy (compare `on` abs)
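For example (maximumBy lives in Data.List and on in Data.Function):
λ> import Data.List (maximumBy)
λ> import Data.Function (on)
λ> maximumBy (compare `on` abs) [-3,2,1,2]
-3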
Stop...hoogle time!
So you've got a list of stuff [a]. And you want to end up with just one of those a. You also want to compare elements of this list in some special way (not their natural ordering), in order to determine which comes first. This is the tricky part, but you should be able to see that what I've described is a function of the form a -> a -> Ordering.
Put it all together:
(a -> a -> Ordering) -> [a] -> a
And hoogle it. maximumBy and minimumBy are the first hits :) Hoogle can be a powerful asset when you learn to use it. (See augustss's answer for details on how to use maximumBy in this case)
Another way to do it, if the conversion is a bit expensive:
maximumWith :: (Ord b) => (a -> b) -> [a] -> a
maximumWith f = snd . maximumBy (compare `on` fst) . map (f &&& id)
This type is similar to GHC.Exts's sortWith, which gives us another way to do it:
maximumWith :: (Ord b) => (a -> b) -> [a] -> a
maximumWith f = head . sortWith (Down . f)
We can define a minimumWith similarly:
minimumWith :: (Ord b) => (a -> b) -> [a] -> a
minimumWith f = head . sortWith f
A look at the source for sortWith reveals that it's implemented by sortBy, so it lacks the caching that the first definition for maximumWith had.
This obviously calls for some benchmarking:
module Main where
import Control.Arrow ((&&&))
import Data.List (sortBy)
import Data.Function (on)
import GHC.Exts (sortWith)
import Criterion.Main
sortWith :: (Ord b) => (a -> b) -> [a] -> [a]
sortWith f = map snd . sortBy (compare `on` fst) . map (f &&& id)
badFib :: Int -> Int
badFib 0 = 1
badFib 1 = 1
badFib n = badFib (n - 1) + badFib (n - 2)
main = defaultMain [ bench "GHC.Exts.sortWith" $ nf (GHC.Exts.sortWith badFib) [0..20]
                   , bench "Main.sortWith" $ nf (Main.sortWith badFib) [0..20]
                   ]
The results on my laptop:
benchmarking GHC.Exts.sortWith
collecting 100 samples, 12 iterations each, in estimated 1.504415 s
bootstrapping with 100000 resamples
mean: 1.264608 ms, lb 1.260519 ms, ub 1.270248 ms, ci 0.950
std dev: 24.42169 us, lb 19.21734 us, ub 31.50275 us, ci 0.950
found 8 outliers among 100 samples (8.0%)
5 (5.0%) high mild
3 (3.0%) high severe
variance introduced by outliers: 0.996%
variance is unaffected by outliers
benchmarking Main.sortWith
collecting 100 samples, 50 iterations each, in estimated 1.516733 s
bootstrapping with 100000 resamples
mean: 305.9089 us, lb 304.0602 us, ub 310.9257 us, ci 0.950
std dev: 14.41005 us, lb 6.680240 us, ub 30.26940 us, ci 0.950
found 18 outliers among 100 samples (18.0%)
9 (9.0%) high mild
9 (9.0%) high severe
variance introduced by outliers: 0.999%
variance is unaffected by outliers
If you are trying to have something ordered and compared by a projection always, rather than just at a specific usage (in which case see augustss's answer), then use a newtype wrapper:
newtype AbsInt = AbsInt Int
instance Eq AbsInt where
    AbsInt x == AbsInt y = abs x == abs y
instance Ord AbsInt where
    compare (AbsInt x) (AbsInt y) = compare (abs x) (abs y)
Now, for example:
maximum [AbsInt 1, AbsInt 10, AbsInt (-50)] = AbsInt (-50)
Presumably you would be working with AbsInt as your objects of study, so you wouldn't be writing those AbsInts everywhere.
The more operations you need on AbsInt, the more boilerplate you need. However if you just want to "pass through" some instances, GHC has an extension GeneralizedNewtypeDeriving that allows that; eg.:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype AbsInt = AbsInt Int
    deriving (Num)
Now AbsInt behaves like an Int with regard to arithmetic, but (given the instances above) by absolute values with regard to comparison. Also note that the Num instance gives you the ability to use literals, so:
(maximum [1,2,-3] :: AbsInt) = AbsInt (-3)
I believe something along the lines of the following should work.
maxByAbs xs = foldl abs_max (head xs) xs
  where abs_max x y = if abs x > abs y then x else y
Looking beyond the task at hand you could generalize it by abstracting out the comparison function and passing it in later.
Here is something I cooked up. It's kind of meh, because it requires (Eq b)
selectOn :: (Eq b) => ([b] -> b) -> (a -> b) -> [a] -> a
selectOn reducer f list = head $ filter (\x -> f x == k) list
  where k = reducer $ map f list
And then:
selectOn maximum abs [1,2,-3]
Or:
selectOn sum id [-3, 0, 3]
I guess I could generalize the compare `on` approach and get the exact same effect.
