I'm looking for a Haskell container type like Data.Map that uses intervals as keys, where the left-most and right-most keys may also be unbounded intervals, but which are otherwise non-overlapping. Additionally, the container should support a function similar to zipWith that allows merging two containers into a new one, using the intersection of both key sets as the new key set and the argument function for a pointwise combination of both value sets.
There are already several packages that provide interval-based maps. I've had a look at IntervalMap, fingertree and SegmentTree, but none of them seems to provide the desired combination function. Their intersection functions only seem to match intervals that are equal in both maps, while I need a version that breaks intervals down into smaller ones where necessary.
The container should basically provide an efficient and storable mapping for key/value series of the form Ord k => k -> Maybe a, i.e. functions that are only defined on specific intervals or that map larger intervals to the same value.
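In other words, I'm after something with roughly this shape (the names IMap, lookupIM and zipIM are made up for illustration; they are not from any existing package):

lookupIM :: Ord k => k -> IMap k a -> Maybe a
zipIM    :: Ord k => (a -> b -> c) -> IMap k a -> IMap k b -> IMap k c

where lookupIM realizes the k -> Maybe a view and zipIM intersects the key sets while combining the values pointwise.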
Here is a small example to demonstrate the issue:
... -4 -3 -2 -1 0 1 2 3 4 ... -- key set
-----------------------------------
... -1 -1 -1 -1 0 1 1 1 1 ... -- series corresponding to signum
... 5 5 5 5 5 5 5 5 5 ... -- series corresponding to const 5
The first series could be efficiently expressed by a mapping [-infinity, -1] -> -1; [0, 0] -> 0; [1, infinity] -> 1, and the second one by [-infinity, infinity] -> 5. Applying a combination function with (*) as the argument function should then give a new series
... -4 -3 -2 -1 0 1 2 3 4 ... -- key set
-----------------------------------
... -5 -5 -5 -5 0 5 5 5 5 ... -- combined series
The crucial point here (and the thing none of the aforementioned packages seem to be able to do) is that, when combining the key sets of these two series, you also have to take the different values into account. Both series span the full range of [-infinity, infinity], but it's necessary to break it into three parts for the final series.
There are also packages for working with intervals, e.g. the range package, which provides an intersection operation on lists of intervals. However, I didn't find a way to use it in combination with one of the Map variants, because it collapses adjacent intervals when doing calculations with them.
NB: Such a container is somewhat similar to a ZipList that extends to both sides, which is why I think it should also be possible to define a lawful Applicative instance for it, where <*> corresponds to the above-mentioned combining function.
To cut a long story short, is there already a package that provides such a container? Or is there an easy way to use the existing packages to build one?
The best suggestion from the comments above seems to be the step-function package, pointed out by B. Mehta. I haven't tried it yet, but it looks like building a wrapper around its SF type is what I was looking for.
Meanwhile, I implemented another solution which I'd like to share. The code for the combining function (combineAscListWith in the code below) is a bit clumsy, as it's more general than needed for just taking the intersection of both maps, so I'll sketch the idea:
First we need an Interval type with an Ord instance, storing pairs of Val a values, which can be -infinity, some value x, or +infinity. From that we can build an IntervalMap, which is just a normal Map from these intervals to the final values.
When combining two such IntervalMaps by intersection, we first convert the maps into lists of key/value pairs. Next we traverse both lists in parallel, zipping them into another list which corresponds to the final intersection map. There are two main cases when combining the list elements:
Both left-most intervals start at the same value. In that case we have found an interval that actually overlaps/intersects. We clip the longer interval to the shorter one and use the values associated with the two intervals to compute the result value, which, together with the shorter interval, goes into the result list. The rest of the longer interval goes back to the input lists.
One of the intervals starts at a smaller value than the other, which means we have found a part of the two series that does not overlap. For the intersection, the non-overlapping part of the interval (or even the whole interval) can be discarded. The rest (if any) goes back to the input list.
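For instance, tracing the signum/const 5 example from above: both lists start at -infinity, but the signum interval [-infinity, -1] -> -1 ends earlier, so we emit [-infinity, -1] -> (-1) * 5 = -5 and push the remainder [0, infinity] -> 5 back into the second list. Next both lists start at 0; the shorter interval [0, 0] -> 0 yields [0, 0] -> 0 * 5 = 0, and [1, infinity] -> 5 goes back again. Finally both remaining intervals are [1, infinity], which gives [1, infinity] -> 1 * 5 = 5.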
For completeness, here's the full example code. Again, the code is rather clumsy; a step-function-based implementation would certainly be more elegant.
import Control.Applicative
import Data.List
import qualified Data.Map as Map
data Val a = NegInf | Val a | Inf deriving (Show, Read, Eq, Ord)
instance Enum a => Enum (Val a) where
succ v = case v of
NegInf -> NegInf
Val x -> Val $ succ x
Inf -> Inf
pred v = case v of
NegInf -> NegInf
Val x -> Val $ pred x
Inf -> Inf
toEnum = Val . toEnum
fromEnum (Val x) = fromEnum x
data Interval a = Interval { lowerBound :: Val a, upperBound :: Val a } deriving (Show, Read, Eq)
instance Ord a => Ord (Interval a) where
compare ia ib = let (a, a') = (lowerBound ia, upperBound ia)
(b, b') = (lowerBound ib, upperBound ib)
in case () of
_ | a' < b -> LT
_ | b' < a -> GT
_ | a == b && a' == b' -> EQ
_ -> error "Ord.Interval.compare: undefined for overlapping intervals"
newtype IntervalMap i a = IntervalMap { unIntervalMap :: Map.Map (Interval i) a }
deriving (Show, Read)
instance Functor (IntervalMap i) where
fmap f = IntervalMap . fmap f . unIntervalMap
instance (Ord i, Enum i) => Applicative (IntervalMap i) where
pure = IntervalMap . Map.singleton (Interval NegInf Inf)
(<*>) = intersectionWith ($)
intersectionWith :: (Ord i, Enum i) => (a -> b -> c)
-> IntervalMap i a -> IntervalMap i b -> IntervalMap i c
intersectionWith f = combineWith (liftA2 f)
combineWith :: (Ord i, Enum i) => (Maybe a -> Maybe b -> Maybe c)
-> IntervalMap i a -> IntervalMap i b -> IntervalMap i c
combineWith f (IntervalMap mpA) (IntervalMap mpB) =
let cs = combineAscListWith f (Map.toAscList mpA) (Map.toAscList mpB)
in IntervalMap $ Map.fromList [ (i, v) | (i, Just v) <- cs ]
combineAscListWith :: (Ord i, Enum i) => (Maybe a -> Maybe b -> c)
-> [(Interval i, a)] -> [(Interval i, b)] -> [(Interval i, c)]
combineAscListWith f as bs = case (as, bs) of
([], _) -> map (\(i, v) -> (i, f Nothing (Just v))) bs
(_, []) -> map (\(i, v) -> (i, f (Just v) Nothing)) as
((Interval a a', va) : as', (Interval b b', vb) : bs')
| a == b -> case () of
_ | a' == b' -> (Interval a a', f (Just va) (Just vb)) : combineAscListWith f as' bs'
_ | a' < b' -> (Interval a a', f (Just va) (Just vb)) : combineAscListWith f as' ((Interval (succ a') b', vb) : bs')
_ | a' > b' -> (Interval a b', f (Just va) (Just vb)) : combineAscListWith f ((Interval (succ b') a', va) : as') bs'
| a < b -> case () of
_ | a' < b -> ((Interval a a', f (Just va) Nothing)) :
(if succ a' == b then id else ((Interval (succ a') (pred b), f Nothing Nothing) :)) (combineAscListWith f as' bs)
_ | True -> (Interval a (pred b), f (Just va) Nothing) : combineAscListWith f ((Interval b a', va) : as') bs
| a > b -> case () of
_ | b' < a -> ((Interval b b', f Nothing (Just vb))) :
(if succ b' == a then id else ((Interval (succ b') (pred a), f Nothing Nothing) :)) (combineAscListWith f as bs')
_ | True -> (Interval b (pred a), f Nothing (Just vb)) : combineAscListWith f as ((Interval a b', vb) : bs')
showIntervalMap :: (Show i, Show a, Eq i) => IntervalMap i a -> String
showIntervalMap = intercalate "; " . map (\(i, v) -> showInterval i ++ " -> " ++ show v)
. Map.toAscList . unIntervalMap
where
showInterval (Interval (Val a) (Val b)) | a == b = "[" ++ show a ++ "]"
showInterval (Interval a b) = "[" ++ showVal a ++ " .. " ++ showVal b ++ "]"
showVal NegInf = "-inf"
showVal (Val x) = show x
showVal Inf = "inf"
main :: IO ()
main = do
let signumMap = IntervalMap $ Map.fromList [(Interval NegInf (Val $ -1), -1),
(Interval (Val 0) (Val 0), 0), (Interval (Val 1) Inf, 1)]
putStrLn $ showIntervalMap $ (*) <$> signumMap <*> pure 5
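Running main should then print the intersected map in three pieces, i.e. something like

[-inf .. -1] -> -5; [0] -> 0; [1 .. inf] -> 5

which matches the combined series from the example at the top.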
I have a tuple x and a Maybe value y
x = (1,1)
y = Just 2
I can do this
z = maybe x (\v -> x & _1 .~ v) y
Or I can create my own operator
(.~?) x y = x %~ (\v -> fromMaybe v y)
z = x & _1 .~? y
But if lens doesn't have such an operator, maybe I don't need it?
So, how do I use the lens set function with Maybe?
It appears that you want
maybeSetFst :: (a, b) -> Maybe a -> (a, b)
which will update the first field if given an update value and will leave it alone otherwise. I think the first implementation you give is very good, but you can give it a more general type:
maybeSetFst :: Field1 s s a a => s -> Maybe a -> s
If you don't want that generality, you can skip the lenses and write (using TupleSections)
maybeSetFst p@(_,b) = maybe p (,b)
Another option is to apply maybe to get the update function:
maybeSetFst p m = maybe id (_1 .~) m p
which can be written
maybeSetFst = flip $ maybe id (_1 .~)
for point-free silliness.
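If you want this for an arbitrary lens rather than just _1, the same maybe trick generalizes. A minimal sketch (the name maybeSet is mine, not part of lens):

import Control.Lens

-- Apply a setter only when a new value is actually present;
-- otherwise return the structure unchanged.
maybeSet :: ASetter s s a b -> Maybe b -> s -> s
maybeSet l = maybe id (l .~)

-- maybeSet _1 (Just 2) (1,1) == (2,1)
-- maybeSet _1 Nothing  (1,1) == (1,1)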
I am trying to understand "Löb and möb: strange loops in Haskell", but right now the meaning is slipping away from me; I just don't see why it could be useful. Just to recall, the function loeb is defined as
loeb :: Functor f => f (f a -> a) -> f a
loeb x = go where go = fmap ($ go) x
or equivalently:
loeb x = go
where go = fmap (\z -> z go) x
In the article there is an example with the [] functor and a spreadsheet implementation, but it is a bit foreign to me, just as spreadsheets themselves are (I've never used them).
While I work on understanding that spreadsheet thing, I think it would help me and others a lot to have more examples besides lists. Are there any applications of loeb for Maybe or other functors?
The primary source (I think) for loeb is Dan Piponi's blog, A Neighborhood of Infinity. There he explains the whole concept in greater detail. I'll replicate a little bit of that as an answer and add some examples.
loeb implements a strange kind of lazy recursion
loeb :: Functor a => a (a x -> x) -> a x
loeb x = fmap (\a -> a (loeb x)) x
Let's imagine we have a type a, where Functor a, and an a-algebra (a function of type a x -> x). You might think of this as a way of computing a value from a structure of values. For instance, here are a few []-algebras:
length :: [Int] -> Int
(!! 3) :: [a] -> a
const 3 :: Num a => [a] -> a
\l -> l !! 2 + l !! 3 :: Num a => [a] -> a
We can see that these a-algebras can use both values stored in the Functor and the structure of the Functor itself.
Another way to think of d :: a x -> x is as a value of x which requires some context, a whole Functorized value a x, in order to be computed. Perhaps this interpretation is more clearly written as Reader (a x) x, emphasizing that this is just a value of x which is delayed, awaiting the a x context to be produced.
type Delay q x = q -> x
Using these ideas we can describe loeb as follows. We're given an f-structure containing some Delayed values, where f is a Functor:
Functor f, f (Delay q x)
Naturally, if we were given a q then we could convert this into an undelayed form. In fact, there's only one (non-cheating) function that does this polymorphically:
force :: Functor f => f (Delay q x) -> q -> f x
force f q = fmap ($ q) f
What loeb does is handle the extra tricky case where q is actually force f q, the very result of this function. If you're familiar with fix, this is exactly how we can produce this result.
loeb :: Functor a => a (Delay (a x) x) -> a x
loeb f = fix (force f)
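Unfolding fix once shows that this agrees with the original definition:

loeb f = fix (force f)
       = force f (fix (force f))
       = fmap ($ fix (force f)) f
       = fmap ($ loeb f) f

which is exactly loeb x = go where go = fmap ($ go) x.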
So to make an example, we simply must build a structure containing Delayed values. One natural choice is to use the list examples from before:
> loeb [ length :: [Int] -> Int
, const 3 :: [Int] -> Int
, const 5 :: [Int] -> Int
, (!! 2) :: [Int] -> Int
, (\l -> l !! 2 + l !! 3) :: [Int] -> Int
]
[5, 3, 5, 5, 10]
Here we can see that the list is full of values delayed waiting on the result of evaluating the list. This computation can proceed exactly because there are no loops in data dependency, so the whole thing can just be determined lazily. For instance, const 3 and const 5 are both immediately available as values. length requires that we know the length of the list but none of the values contained so it also proceeds immediately on our fixed-length list. The interesting ones are the values delayed waiting on other values from inside our result list, but since (!! 2) only ends up depending on the third value of the result list, which is determined by const 5 and thus can be immediately available, the computation moves forward. The same idea happens with (\l -> l !! 2 + l !! 3).
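Conversely, if there is a loop in the data dependencies, loeb never produces a result. A tiny example of my own:

> loeb [ (!! 1), (!! 0) ]

Here each cell waits for the other, so the computation diverges (GHC typically reports it as <<loop>>).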
So there you have it: loeb completes this strange kind of delayed value recursion. We can use it on any kind of Functor, though. All we need to do is to think of some useful Delayed values.
Chris Kuklewicz's comment notes that there's not a lot you could do interestingly with Maybe as your functor. That's because all of the delayed values over Maybe take the form
maybe (default :: a) (f :: a -> a) :: Maybe a -> a
and all of the interesting values of Maybe (Delay (Maybe a) a) ought to be Just (maybe default f), since loeb Nothing = Nothing. So at the end of the day, the default value never even gets used: we always just have
loeb (Just (maybe default f)) == fix f
so we may as well write that directly.
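As a quick check of that correspondence (my own example, with 0 as the default and f = const 5, using fix from Data.Function):

> loeb (Just (maybe 0 (const 5)))
Just 5
> fix (const 5)
5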
You can use it for dynamic programming. The example that comes to mind is the Smith-Waterman algorithm.
import Data.Array
import Data.List
import Control.Monad
import Data.Ord (comparing)
data Base = T | C | A | G deriving (Eq,Show)
data Diff = Sub Base Base | Id Base | Del Base | Ins Base deriving (Eq,Show)
loeb x = let go = fmap ($ go) x in go
s a b = if a == b then 1 else 0
smithWaterman a' b' = let
[al,bl] = map length [a',b']
[a,b] = zipWith (\l s -> array (1,s) $ zip [1..] l) [a',b'] [al,bl]
h = loeb $ array ((0,0),(al,bl)) $
[((x,0),const 0) | x <- [0 .. al]] ++
[((0,y),const 0) | y <- [1 .. bl]] ++
[((x,y),\h' -> maximum [
0,
(h' ! (x - 1,y - 1)) + s (a ! x) (b ! y),
(h' ! (x - 1, y)) - 1,
(h' ! (x, y - 1)) - 1
]
) | x <- [1 .. al], y <- [1 .. bl]]
ml l (0,0) = l
ml l (x,0) = ml (Del (a ! x): l) (x - 1, 0)
ml l (0,y) = ml (Ins (b ! y): l) (0, y - 1)
ml l (x,y) = let
(p,e) = maximumBy (comparing ((h !) . fst)) [
((x - 1,y),Del (a ! x)),
((x, y - 1),Ins (b ! y)),
((x - 1, y - 1),if a ! x == b ! y then Id (a ! x) else Sub (a ! x) (b ! y))
]
in ml (e : l) p
in ml [] (al,bl)
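A hypothetical invocation (not part of the original answer) would look like

main :: IO ()
main = print $ smithWaterman [A,C,T,G] [A,T,G]

which prints the list of Diff edit operations recovered by the traceback.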
Here is a live example where it is used for a Map String Float:
http://tryplayg.herokuapp.com/try/spreadsheet.hs/edit
with loop detection and loop resolution.
This program calculates speed, time and space. Each one depends on the other two. Each cell has two values: its currently entered value and its expression as a function of the other cells' values/expressions. Circularity is permitted.
The cell recalculation code uses the famous loeb expression published by Dan Piponi in 2006. To my knowledge there hasn't been any materialization of this formula in a real working spreadsheet until now; this one comes close. Since loeb enters an infinite loop when circular expressions are used, the program counts the loops and reduces complexity by progressively substituting formulas with cell values until the expression has no loops.
This program is configured for immediate recalculation on cell change, but that can be adapted to allow the modification of more than one cell before recalculation by triggering it by means of a button.
This is the blog post:
http://haskell-web.blogspot.com.es/2014/09/spreadsheet-like-program-in-browser.html
I have a function which computes a fixed point in terms of iterate:
equivalenceClosure :: (Ord a) => Relation a -> Relation a
equivalenceClosure = fst . List.head -- "guaranteed" to exist
. List.dropWhile (uncurry (/=)) -- removes pairs that are not equal
. U.List.pairwise (,) -- applies (,) to adjacent list elements
. iterate ( reflexivity
. symmetry
. transitivity
)
Notice that we can abstract from this to:
findFixedPoint :: (a -> a) -> a -> a
findFixedPoint f = fst . List.head
. List.dropWhile (uncurry (/=)) -- dropWhile we have not reached the fixed point
. U.List.pairwise (,) -- applies (,) to adjacent list elements
. iterate
$ f
Can this function be written in terms of fix? It seems like there should be a transformation from this scheme to something with fix in it, but I don't see it.
There's quite a bit going on here, from the mechanics of lazy evaluation to the definition of a fixed point to the method of finding one. In short, I believe you may be conflating the fixed point of function application in the lambda calculus with the kind of fixed point you need.
It may be helpful to note that your implementation of finding the fixed-point (utilizing iterate) requires a starting value for the sequence of function application. Contrast this to the fix function, which requires no such starting value (As a heads up, the types give this away already: findFixedPoint is of type (a -> a) -> a -> a, whereas fix has type (a -> a) -> a). This is inherently because the two functions do subtly different things.
Let's dig into this a little deeper. First, I should say that you may need to give a little more information (your implementation of pairwise, for example). But with a naive first try, using my (possibly flawed) implementation of what I believe you want out of pairwise, your findFixedPoint function is equivalent in result to fix, but only for a certain class of functions.
Let's take a look at some code:
{-# LANGUAGE RankNTypes #-}
import Control.Monad.Fix
import qualified Data.List as List
findFixedPoint :: forall a. Eq a => (a -> a) -> a -> a
findFixedPoint f = fst . List.head
. List.dropWhile (uncurry (/=)) -- dropWhile we have not reached the fixed point
. pairwise (,) -- applies (,) to adjacent list elements
. iterate f
pairwise :: (a -> a -> b) -> [a] -> [b]
pairwise f [] = []
pairwise f (x:[]) = []
pairwise f (x:(xs:xss)) = f x xs:pairwise f xss
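(For reference, I read the question's U.List.pairwise as pairing overlapping adjacent elements; a minimal sketch of that reading would be

pairwise' :: (a -> a -> b) -> [a] -> [b]
pairwise' f xs = zipWith f xs (drop 1 xs)

The skipping variant above still reaches the fixed point, it just compares non-overlapping pairs.)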
contrast this to the definition of fix:
fix :: (a -> a) -> a
fix f = let x = f x in x
and you'll notice that we're finding a very different kind of fixed-point (i.e. we abuse lazy evaluation to generate a fixed point for function application in the mathematical sense, where we only stop evaluation iff* the resulting function, applied to itself, evaluates to the same function).
For illustration, let's define a few functions:
lambdaA = const 3
lambdaB = (*)3
and let's see the difference between fix and findFixedPoint:
*Main> fix lambdaA -- evaluates to const 3 (fix (const 3)) = 3
-- fixed point after one iteration
3
*Main> findFixedPoint lambdaA 0 -- evaluates to [const 3 0, const 3 (const 3 0), ... thunks]
-- followed by grabbing the head.
3
*Main> fix lambdaB -- does not stop evaluating
^CInterrupted.
*Main> findFixedPoint lambdaB 0 -- evaluates to [0, 0, ...thunks]
-- followed by grabbing the head
0
Now, if we can't specify the starting value, what is fix used for? It turns out that by adding fix to the lambda calculus, we gain the ability to specify the evaluation of recursive functions. Consider fact' = \rec n -> if n == 0 then 1 else n * rec (n-1); we can compute the fixed point of fact' as:
*Main> (fix fact') 5
120
where evaluating (fix fact') repeatedly applies fact' until we reach the same function, which we then call with the value 5. We can see this in:
fix fact'
= fact' (fix fact')
= (\rec n -> if n == 0 then 1 else n * rec (n-1)) (fix fact')
= \n -> if n == 0 then 1 else n * fix fact' (n-1)
= \n -> if n == 0 then 1 else n * fact' (fix fact') (n-1)
= \n -> if n == 0 then 1
else n * (\rec n' -> if n' == 0 then 1 else n' * rec (n'-1)) (fix fact') (n-1)
= \n -> if n == 0 then 1
else n * (if n-1 == 0 then 1 else (n-1) * fix fact' (n-2))
= \n -> if n == 0 then 1
else n * (if n-1 == 0 then 1
else (n-1) * (if n-2 == 0 then 1
else (n-2) * fix fact' (n-3)))
= ...
So what does all this mean? Depending on the function you're dealing with, you won't necessarily be able to use fix to compute the kind of fixed point you want. This is, to my knowledge, dependent on the function(s) in question. Not all functions have the kind of fixed point computed by fix!
*I've avoided talking about domain theory, as I believe it would only confuse an already subtle topic. If you're curious, fix finds a certain kind of fixed point, namely the least available fixed point of the poset the function is specified over.
Just for the record, it is possible to define the function findFixedPoint using fix.
As Raeez has pointed out, recursive functions can be defined in terms of fix.
The function that you are interested in can be recursively defined as:
findFixedPoint :: Eq a => (a -> a) -> a -> a
findFixedPoint f x =
case (f x) == x of
True -> x
False -> findFixedPoint f (f x)
This means that we can define it as fix ffp where ffp is:
ffp :: Eq a => ((a -> a) -> a -> a) -> (a -> a) -> a -> a
ffp g f x =
case (f x) == x of
True -> x
False -> g f (f x)
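Spelling that out (findFixedPoint' is just a fresh name for the fix-based version; fix comes from Data.Function or Control.Monad.Fix):

findFixedPoint' :: Eq a => (a -> a) -> a -> a
findFixedPoint' = fix ffp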
For a concrete example, let us assume that f is defined as
f = drop 1
It is easy to see that for every finite list l we have findFixedPoint f l == [].
Here is how fix ffp would work when the "value argument" is []:
(fix ffp) f []
= { definition of fix }
ffp (fix ffp) f []
= { f [] = [] and definition of ffp }
[]
On the other hand, if the "value argument" is [42], we would have:
fix ffp f [42]
= { definition of fix }
ffp (fix ffp) f [42]
= { f [42] =/= [42] and definition of ffp }
(fix ffp) f (f [42])
= { f [42] = [] }
(fix ffp) f []
= { see above }
[]
Intro
A fixed point of a function is an argument that the function returns unchanged: f x == x. An example would be (\x -> x^2) 1 == 1 -- here the fixed point is 1.
Attractive fixed points are those fixed points that can be found by iteration from some starting point. For example, iterating (\x -> x^2) from 0.5 converges to 0, so 0 is an attractive fixed point of this function.
Attractive fixed points can, with luck, be approached (and, in some cases, even reached in finitely many steps) from a suitable non-fixed point by iterating the function from that point. Other times the iteration diverges, so there should first be a proof in place that the fixed point will attract the iterating process. For some functions, the proof is common knowledge.
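For instance, a quick GHCi check of the squaring example above:

take 5 $ iterate (\x -> x^2) 0.5
-- [0.5,0.25,6.25e-2,3.90625e-3,1.52587890625e-5]

The iterates keep shrinking towards the fixed point 0.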
The code
I have tidied up some prior art that accomplishes the task neatly. I then set out to extend the same idea to monadic functions, but with no luck. This is the code I have so far:
module Fix where
-- | Take elements from a list until two equal adjacent elements are met. Of those,
-- take only the first one, then be done with it.
--
-- This function is intended to operate on infinite lists, but it will still
-- work on finite ones.
converge :: Eq a => [a] -> [a]
converge = convergeBy (==)
-- \ r a = \x -> (x + a / x) / 2
-- \ -- ^ A method of computing square roots due to Isaac Newton.
-- \ take 8 $ iterate (r 2) 1
-- [1.0,1.5,1.4166666666666665,1.4142156862745097,1.4142135623746899,
-- 1.414213562373095,1.414213562373095,1.414213562373095]
-- \ converge $ iterate (r 2) 1
-- [1.0,1.5,1.4166666666666665,1.4142156862745097,1.4142135623746899,1.414213562373095]
-- | Find a fixed point of a function. May present a non-terminating function
-- if applied carelessly!
fixp :: Eq a => (a -> a) -> a -> a
fixp f = last . converge . iterate f
-- \ fixp (r 2) 1
-- 1.414213562373095
-- | Non-overloaded counterpart to `converge`.
convergeBy :: (a -> a -> Bool) -> [a] -> [a]
convergeBy _ [ ] = [ ]
convergeBy _ [x] = [x]
convergeBy eq (x: xs@(y: _))
| x `eq` y = [x]
| otherwise = x : convergeBy eq xs
-- \ convergeBy (\x y -> abs (x - y) < 0.001) $ iterate (r 2) 1
-- [1.0,1.5,1.4166666666666665,1.4142156862745097]
-- | Non-overloaded counterpart to `fixp`.
fixpBy :: (a -> a -> Bool) -> (a -> a) -> a -> a
fixpBy eq f = last . convergeBy eq . iterate f
-- \ fixpBy (\x y -> abs (x - y) < 0.001) (r 2) 1
-- 1.4142156862745097
-- | Find a fixed point of a monadic function. May present a non-terminating
-- function if applied carelessly!
-- TODO
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = last . _ . iterate f
(It may be loaded in a REPL. There are examples to be run in the comments, for illustration.)
The problem
There is an _ in the definition of fixpM above. It is a function of type [m a] -> [m a] that should do, in principle, the same as the function converge above, but kinda lifted. I have come to suspect it can't be written.
I did compose another, specialized version of fixpM:
fixpM :: (Eq a, Monad m) => (a -> m a) -> a -> m a
fixpM f x = do
y <- f x
if x == y
then return x
else fixpM f y
-- \ fixpM (\x -> (".", x^2)) 0.5
-- ("............",0.0)
(An example run is, again, found in a comment.)
But it is a whole different algorithm, not an extension / generalization of the pure function we started with. In particular, we never pass through the stage where the list of iterates up to the first repetition is made available.
Can we not extend the pure algorithm to work on monadic functions?
And if not, why?
I would admire a hint towards a piece of theory that explains how to either prove impossibility or construct a solution in a routine fashion, but perhaps this is just a triviality I'm missing while busy typing idle questions, in which case a straightforward counterexample would defeat me.
P.S. I understand this is a somewhat trivial exercise. Still, I want to be done with it once and for all.
P.S. 2 A better approximation to the pure variant, as suggested by @n-m (retaining iterate), would look like this:
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = collapse . iterate f
where
collapse (mx: mxs@(my: _)) = do
x <- mx
y <- my
if x == y
then return x
else collapse mxs
Through the use of iterate, its behaviour with regard to the monad is different in that the effects are retained between consecutive approximations. Performance-wise, these functions are of the same complexity.
P.S. 3 A more complete rendition of the ideas offered by @n-m encodes the algorithm, as far as I can see, one-to-one with the pure variant:
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = lastM . convergeM . iterate (f >>= \x -> return x )
convergeM :: (Monad m, Eq a) => [m a] -> m [a]
convergeM = convergeByM (==)
convergeByM :: (Monad m, Eq a) => (a -> a -> Bool) -> [m a] -> m [a]
convergeByM _ [ ] = return [ ]
convergeByM _ [mx] = mx >>= \x -> return [x]
convergeByM eq xs = do
case xs of
[ ] -> return [ ]
[mx] -> mx >>= \x -> return [x]
(mx: mxs@(my: _)) -> do
x <- mx
y <- my
if x `eq` y
then return [x]
else do
xs <- convergeByM eq mxs
return (x:xs)
lastM :: Monad m => m [a] -> m a
lastM mxs = mxs >>= \xs -> case xs of
[] -> error "Fix.lastM: No last element!"
xs -> return . head . reverse $ xs
Unfortunately, it happens to be rather lengthy. More substantially, both these solutions have the same somewhat undesirable behaviour with regard to the effects of the monad: all the effects are retained between consecutive approximations.