Point-free notation, recursion, and pattern matching - Haskell

So I keep hearing a lot about point-free programming, and I decided to do a little experiment to test my grasp of it. This involved taking a pointed function that calculates the factorial of a number and converting it to point-free form. I managed to do it, but the point-free result is a lot less readable than the pointed one.
-- pointed
fact 0 = 1
fact n = n * (fact (n-1))
-- point free
fact' = foldr1 (*) . takeWhile ((<) 0) . iterate (flip (-) 1)
Am I missing something essential to point-free notation, or is this as readable as certain transformations get? To me it seems that a big part of the fact function is the pattern match on zero, and indeed, pattern matching is one of the biggest reasons I love Haskell. However, point-free notation seems to completely disallow that, along with some other things that are extremely useful, like list comprehensions.
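For reference, here is how that pipeline evaluates for an input of 5 (my own annotation, not part of the original question):
-- iterate (flip (-) 1) 5       ==> [5,4,3,2,1,0,-1,...]
-- takeWhile ((<) 0) [5,4,..]   ==> [5,4,3,2,1]   (keeps x while 0 < x)
-- foldr1 (*) [5,4,3,2,1]       ==> 120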

The canonical factorial in pointfree form is:
fact = product . enumFromTo 1
(which is equivalent to fact n = product [1..n])
I find this to be pretty readable. However, I would concur that the original version:
fact 0 = 1
fact n = n * (fact (n-1))
matches the definition very well and is also readable.
The point (ha!) of pointfree form is to make it easy to reason about functions as the composition of other functions. However, the factorial function isn't really an excellent candidate for this kind of reasoning.
The decision is yours, obviously.
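One practical footnote to this answer (my observation): product returns 1 for an empty list, so the canonical version gets fact 0 = 1 for free, whereas the foldr1-based pipeline from the question crashes on 0, because foldr1 has no result for the empty list that takeWhile produces there.
-- product [1..0] == product [] == 1   -- so fact 0 = 1
-- foldr1 (*) []                       -- throws: foldr1 has no result on []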

For each algebraic sum type there should exist a case-discriminator function which encapsulates the pattern matching for that type. We already have
either :: (a -> c) -> (b -> c) -> Either a b -> c
maybe :: b -> (a -> b) -> Maybe a -> b
Similarly, there should be such a function for numbers,
num :: (Eq a, Num a) => b -> (a -> b) -> a -> b
num z nz 0 = z
num z nz x = nz x
so we can write
import Control.Applicative
import Data.Function
fact :: (Eq a, Num a) => a -> a
fact x = num 1 (\x -> (*) (fact (pred x)) x) x
       = num 1 ((*) =<< (fact . pred)) x
i.e.
fact = (num 1 . ((*) =<<) . (. pred)) fact
     = fix (num 1 . ((*) =<<) . (. pred))
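For reference, here is the derivation assembled into a small runnable module (my own packaging of the answer above; note the Eq constraint, which the literal pattern in num requires):
import Data.Function (fix)

num :: (Eq a, Num a) => b -> (a -> b) -> a -> b
num z _  0 = z
num _ nz x = nz x

fact :: Integer -> Integer
fact = fix (num 1 . ((*) =<<) . (. pred))

main :: IO ()
main = print (map fact [0 .. 5])  -- [1,1,2,6,24,120]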

Related

How to implement Factorial via Control.Arrow.loop?

I wonder whether it is possible to implement factorial using Control.Arrow.loop.
loop :: ArrowLoop a => a (b, d) (c, d) -> a b c
One obvious idea is to somehow implement a terminating branch (a branch where the first element of the pair (type c) doesn't depend on the second element of the pair (type d)).
It seems to me that this can't be done: we can't apply any boolean function to the second element of the pair (type d) during the first iteration, because that would cause infinite recursion. That only leaves us with the argument (type b), but the result of a boolean function applied to it wouldn't differ between iterations (the argument doesn't change), so the loop would either terminate instantly or never terminate at all.
The other idea I had is to create an endless stream of factorials, but this doesn't seem feasible either, since, once again, the argument can't be changed.
So, I have 3 questions:
Am I right about the points above?
Am I missing any other concept which would help to implement factorial via Control.Arrow.loop?
What is the correct idea behind this implementation?
I've never actually used ArrowLoop before; loop is pretty cool.
Here is a factorial implemented using loop:
import Control.Arrow (loop)

fact :: Integer -> Integer
fact =
  loop $ \(n, f) ->
    ( f n 1
    , \i acc ->
        if i > 0
          then f (i - 1) (i * acc)
          else acc)
Let's give it a try:
λ> fact <$> [1..11]
[1,2,6,24,120,720,5040,40320,362880,3628800,39916800]
I don't know if I can answer your first question, but for the 3rd one it's obviously possible. As for concepts that could help you, I think the fixed point is the one you are looking for. For example, you can start by trying this ;)
λ> import Data.Function
λ> fix error
Once you press enough Ctrl+C you can write factorial using fix point:
λ> let fact = fix $ \ f i -> if i > 1 then i * f (i - 1) else i
λ> fact <$> [1..11]
[1,2,6,24,120,720,5040,40320,362880,3628800,39916800]
Edit
It seems like a bit of expansion on the answer could be helpful.
First of all let's look at an alternative and better (due to tail recursion) implementation of fact using fix, so we can see how it compares with our implementation using loop:
factFix :: Integer -> Integer
factFix n =
  fix
    (\f ->
       \i acc ->
         if i > 0
           then f (i - 1) (i * acc)
           else acc)
    n
    1
We can see it is not far off. In both cases we get f as an argument and we return back a function that uses that f; in fact, the returned non-recursive function is identical in both cases. Just for clarity, let's extract it and reuse it in both places:
factNoRec :: (Ord p, Num p) => (p -> p -> p) -> p -> p -> p
factNoRec f i acc =
  if i > 0
    then f (i - 1) (i * acc)
    else acc

factLoop :: Integer -> Integer
factLoop n = loop (\(k, f) -> (f k 1, factNoRec f)) n

factFix :: Integer -> Integer
factFix n = fix (\f -> factNoRec f) n 1
Hopefully now it is much more apparent that they are really related concepts.
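A quick sanity check (mine) that the two formulations compute the same thing:
λ> map factLoop [1..6] == map factFix [1..6]
True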
Looking into the implementations of fix and loop (at least for functions, because there are also mfix and a loop for Kleisli) provides even more insight into their relation:
λ> fix f = let x = f x in x
λ> loop f b = let (c,d) = f (b,d) in c
They are really close to each other.
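One way to make the relationship fully explicit (a sketch of mine, not from the original answer): loop for plain functions can itself be defined via fix, by taking the fixed point of the output pair and projecting out its first component.
import Data.Function (fix)

-- the knot is tied through the second component of the pair
loopViaFix :: ((b, d) -> (c, d)) -> b -> c
loopViaFix f b = fst (fix (\cd -> f (b, snd cd)))

-- λ> loopViaFix (\(k, f) -> (f k 1, factNoRec f)) 5
-- 120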
How about type signatures:
λ> :t fix
fix :: (t -> t) -> t
λ> :t loop
loop :: ((b, d) -> (c, d)) -> b -> c
Those look different. But if you do a bit of unification in the fact case you'll see that fix and loop acquire types:
λ> :t fix :: ((a -> b -> c) -> (a -> b -> c)) -> a -> b -> c
λ> :t loop :: ((b, a -> b -> c) -> (c, a -> b -> c)) -> b -> c
All of a, b, and c become Integer in the end, but looking at the type variables instead gives better insight into what's going on. And what's really going on is just recursion by means of fixed-point combinators.

How to define the fibonacci sequence using a fold for natural numbers?

I am currently learning folds in the sense of structural recursion/catamorphisms. I implemented power and factorial using a fold for natural numbers. Please note that I barely know Haskell, so the code is probably awkward:
foldNat zero succ = go
  where
    go n = if (n <= 0) then zero else succ (go (n - 1))
pow n = foldNat 1 (n*)
fact n = foldNat 1 (n*) n
Next I wanted to adapt the fibonacci sequence:
fib n = go n (0,1)
  where
    go !n (!a, !b) | n == 0    = a
                   | otherwise = go (n-1) (b, a+b)
With fib I have a pair as second argument whose fields are swapped at each recursive call. I am stuck at this point, because I don't understand the mechanics of the conversion process.
[EDIT]
As noted in the comments, my fact function is wrong. Here is a new implementation based on a paramorphism (hopefully):
paraNat zero succ = go
  where
    go n = if (n <= 0) then zero else succ (go (n - 1), n)
fact = paraNat 1 (\(r, n) -> n * r)
Let the types guide you. Here is your foldNat, but with a type signature:
import Numeric.Natural

foldNat :: b -> (b -> b) -> Natural -> b
foldNat zero succ = go
  where
    go n = if (n <= 0) then zero else succ (go (n - 1))
Having another look at the go helper in your implementation of fib, we can note that the recursive case takes and returns a (Natural, Natural) pair. Comparing that with the successor argument to foldNat suggests we want b to be (Natural, Natural). That is a nice hint on how the pieces of go should fit:
fibAux = foldNat (0, 1) (\(a, b) -> (b, a + b))
(I am ignoring the matter of strictness for now, but I will get back to that.)
This is not quite fib yet, as can be seen by looking at the result type. Fixing that, though, is no problem, as Robin Zigmond notes:
fib :: Natural -> Natural
fib = fst . foldNat (0, 1) (\(a, b) -> (b, a + b))
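A quick check in GHCi (mine):
λ> map fib [0..10]
[0,1,1,2,3,5,8,13,21,34,55]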
At this point, you might want to work backwards and substitute the definition of foldNat to picture how this corresponds to an explicitly recursive solution.
While this is a perfectly good implementation of fib, there is one major difference between it and the one you had written: this one is a lazy right fold (as is the norm for Haskell catamorphisms), while yours was clearly meant as a strict left fold. (And yes, it does make sense to use a strict left fold here: in general, if what you are doing looks like arithmetic, you ideally want strict left, while if it looks like building a data structure, you want lazy right). The good news, though, is that we can use catamorphisms to define pretty much anything that consumes a value recursively... including strict left folds! Here I will use an adapted version of the foldl-from-foldr trick (see this question for a detailed explanation of that in the case of lists), which relies on a function like this:
lise :: (b -> b) -> ((b -> b) -> (b -> b))
lise suc = \g -> \n -> g (suc n)
The idea is that we take advantage of function composition (\n -> g (suc n) is the same as g . suc) to do things in the opposite order -- it is as if we swapped succ and go in the right hand side of your definition of go. lise suc can be used as the successor argument to foldNat. That means we will get a b -> b function in the end rather than a b, but that is not a problem because we can apply it to the zero value ourselves.
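To see the trick in action, here is an unrolling for n = 2 (my derivation, using the definitions above):
-- foldNat id (lise suc) 2 zero
-- = lise suc (lise suc id) zero
-- = (lise suc id) (suc zero)
-- = id (suc (suc zero))
-- The successor is applied to the accumulator on the way down, rather than
-- to the recursive result on the way back up, which is what makes the
-- left-fold behaviour possible.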
Since we want a strict left fold, we have to sneak in a ($!) to make sure suc n is eagerly evaluated:
lise' :: (b -> b) -> ((b -> b) -> (b -> b))
lise' suc = \g -> \n -> g $! suc n
Now we can define a strict left fold (it is to foldNat what foldl' from Data.List is to foldr):
foldNatL' :: b -> (b -> b) -> Natural -> b
foldNatL' zero suc n = foldNat id (lise' suc) n zero
There is a final, important detail to deal with: making the fold strict is of little use if we are lazily building a pair along the way, as the pair components will remain being built lazily. We could deal with that by using ($!) along with (,) for building the pair in the successor function. However, I believe it is nicer to use a strict pair type instead so that we don't have to worry with that:
data SP a b = SP !a !b
  deriving (Eq, Ord, Show)

fstSP :: SP a b -> a
fstSP (SP a _) = a

sndSP :: SP a b -> b
sndSP (SP _ b) = b
The ! annotations mark the fields as strict (note that you don't need to enable BangPatterns to use them).
With everything in place, we can at last have fib as a strict left fold:
fib' :: Natural -> Natural
fib' = fstSP . foldNatL' (SP 0 1) (\(SP a b) -> SP b (a + b))
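A quick check (mine) that it works:
λ> fib' 10
55
λ> fib' 100
354224848179261915075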
P.S.: As amalloy notes, your fact calculates n^n rather than n!. That is probably a matter better left for a separate question; in any case, the gist of it is that factorial is more naturally expressed as a paramorphism on naturals, rather than as a plain catamorphism. (For more on that, see, for instance, the Practical Recursion Schemes blog post by Jared Tobin, more specifically the section about paramorphisms.)
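To round that off, here is a typed sketch of the paramorphism idea (my addition, mirroring the asker's paraNat edit above): the successor argument sees both the recursive result and the current number, which is exactly what factorial needs.
import Numeric.Natural

paraNat :: b -> ((b, Natural) -> b) -> Natural -> b
paraNat zero succ = go
  where
    go n = if n <= 0 then zero else succ (go (n - 1), n)

factorial :: Natural -> Natural
factorial = paraNat 1 (\(r, n) -> n * r)

-- λ> map factorial [0..5]
-- [1,1,2,6,24,120]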

Why can't I implement these functions with a Functor/Applicative constraint only?

Consider the following functions, taken from the answers to this problem set:
func6 :: Monad f => f Integer -> f (Integer,Integer)
func6 xs = do
  x <- xs
  return $ if x > 0 then (x, 0)
                    else (0, x)
func6' :: Functor f => f Integer -> f (Integer,Integer)
-- slightly unorthodox idiom, with a partially applied fmap
func6' = fmap $ \x -> if x > 0 then (x,0) else (0,x)
-- func7 cannot be implemented without Monad if we care about the precise
-- evaluation and laziness behaviour:
-- > isJust (func7 (Just undefined))
-- *** Exception: Prelude.undefined
--
-- If we care not, then it is equivalent to func6, and there we can. Note that
-- > isJust (func6 (Just undefined))
-- True
func7 :: Monad f => f Integer -> f (Integer,Integer)
func7 xs = do
  x <- xs
  if x > 0 then return (x, 0)
           else return (0, x)
-- func9 cannot be implemented without Monad: The structure of the computation
-- depends on the result of the first argument.
func9 :: Monad f => f Integer -> f Integer -> f Integer -> f Integer
func9 xs ys zs = xs >>= \x -> if even x then ys else zs
Although I understand the counterexample for func7, I don't understand the given reasoning for why func7 and func9 can only be implemented with Monad. How do the monad/applicative/functor laws fit with the above reasoning?
I don't think typeclass laws are what you need to be worrying about here; in fact, I think the typeclasses unnecessarily complicate the exercise, if your purpose is to understand nonstrictness.
Here's a simpler example where everything is monomorphic, and rather than give examples using bottom, we're going to use :sprint in GHCi to watch the extent of the evaluation.
func6
My x6 example here corresponds to func6 in the question.
λ> import Data.Bool (bool)
λ> import Data.Maybe (isJust)
λ> x6 = Just . bool 'a' 'b' =<< Just True
Initially, nothing has been evaluated.
λ> :sprint x6
x6 = _
Now we evaluate 'isJust x6'.
λ> isJust x6
True
And now we can see that x6 has been partially evaluated. Only to its head, though.
λ> :sprint x6
x6 = Just _
Why? Because there was no need to know the result of the bool 'a' 'b' part just to determine whether the Maybe was going to be a Just. So it remains an unevaluated thunk.
func7
My x7 example here corresponds to func7 in the question.
λ> x7 = bool (Just 'a') (Just 'b') =<< Just True
Again, initially nothing is evaluated.
λ> :sprint x7
x7 = _
And again we'll apply isJust.
λ> isJust x7
True
In this case, the content of the Just did get evaluated (so we say this definition was "more strict" or "not as lazy").
λ> :sprint x7
x7 = Just 'b'
Why? Because we had to evaluate the bool application before we could tell whether it was going to produce a Just result.
Chris Martin's answer covers func6 versus func7 very well. (In short, the difference is that, thanks to laziness, func6 @Maybe can decide whether the constructor used for the result should be Just or Nothing without actually having to look at any value within its argument.)
As for func9, what makes Monad necessary is that the function involves using values found in xs to decide on the functorial context of the result. (Synonyms for "functorial context" in this setting include "effects" and, as the solution you quote puts it, "structure of the computation".) For the sake of illustration, consider:
func9 (fmap read getLine) (putStrLn "Even!") (putStrLn "Odd!")
It is useful to compare the types of fmap, (<*>) and (>>=):
(<$>) :: Functor f => (a -> b) -> (f a -> f b) -- (<$>) = fmap
(<*>) :: Applicative f => f (a -> b) -> (f a -> f b)
(=<<) :: Monad f => (a -> f b) -> (f a -> f b) -- (=<<) = flip (>>=)
The a -> b function passed to fmap has no information about f, the involved Functor, and so fmap cannot change the effects at all. (<*>) can change the effects, but only by combining the effects of its two arguments -- the a -> b functions that might be found in the f (a -> b) argument have no bearing on that whatsoever. With (>>=), though, the a -> f b function is used precisely to generate effects from values found in the f a argument.
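Here is a concrete Maybe illustration of that difference (my example): no function passed to fmap can turn a Just into a Nothing, while the function passed to (>>=) can.
λ> fmap (\x -> if even x then x else 0) (Just 1)
Just 0
λ> Just 1 >>= \x -> if even x then Just x else Nothing
Nothing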
I suggest Difference between Monad and Applicative in Haskell as further reading on what you gain (and lose) when moving between Functor, Applicative and Monad.

What else can the `loeb` function be used for?

I am trying to understand "Löb and möb: strange loops in Haskell", but right now the meaning is slipping away from me; I just don't see why it could be useful. Just to recall, the function loeb is defined as
loeb :: Functor f => f (f a -> a) -> f a
loeb x = go where go = fmap ($ go) x
or equivalently:
loeb x = go
  where go = fmap (\z -> z go) x
In the article there is an example with the [] functor and a spreadsheet implementation, but it is a bit foreign to me, just as spreadsheets themselves are (I've never used them).
While I'm working through that spreadsheet thing, I think it would help me and others a lot to have more examples besides lists. Is there any application of loeb for Maybe or other functors?
The primary source (I think) for loeb is Dan Piponi's blog, A Neighborhood of Infinity. There he explains the whole concept in greater detail. I'll replicate a little bit of that as an answer and add some examples.
loeb implements a strange kind of lazy recursion
loeb :: Functor a => a (a x -> x) -> a x
loeb x = fmap (\a -> a (loeb x)) x
Let's imagine we have a type a, where Functor a, and an a-algebra (a function of type a x -> x). You might think of this as a way of computing a value from a structure of values. For instance, here are a few []-algebras:
length :: [Int] -> Int
(!! 3) :: [a] -> a
const 3 :: Num a => [a] -> a
\l -> l !! 2 + l !! 3 :: Num a => [a] -> a
We can see that these a-algebras can use both values stored in the Functor and the structure of the Functor itself.
Another way to think of d :: a x -> x is as a value of x which requires some context, a whole Functorized value a x, in order to be computed. Perhaps this interpretation is more clearly written as Reader (a x) x, emphasizing that this is just a value of x which is delayed, awaiting the a x context to be produced.
type Delay q x = q -> x
Using these ideas we can describe loeb as follows. We're given an f-structure containing some Delayed values, where f is a Functor:
Functor f, f (Delay q x)
Naturally, if we were given a q then we could convert this into a not delayed form. In fact, there's only one (non-cheating) function that does this polymorphically:
force :: Functor f => f (Delay q x) -> q -> f x
force f q = fmap ($ q) f
What loeb does is handle the extra tricky case where q is actually force f q, the very result of this function. If you're familiar with fix, this is exactly how we can produce this result.
loeb :: Functor a => a (Delay (a x) x) -> a x
loeb f = fix (force f)
So to make an example, we simply must build a structure containing Delayed values. One natural example of this is to use the list examples from before
> loeb [ length                  :: [Int] -> Int
       , const 3                 :: [Int] -> Int
       , const 5                 :: [Int] -> Int
       , (!! 2)                  :: [Int] -> Int
       , (\l -> l !! 2 + l !! 3) :: [Int] -> Int
       ]
[5, 3, 5, 5, 10]
Here we can see that the list is full of values delayed waiting on the result of evaluating the list. This computation can proceed exactly because there are no loops in data dependency, so the whole thing can just be determined lazily. For instance, const 3 and const 5 are both immediately available as values. length requires that we know the length of the list but none of the values contained so it also proceeds immediately on our fixed-length list. The interesting ones are the values delayed waiting on other values from inside our result list, but since (!! 2) only ends up depending on the third value of the result list, which is determined by const 5 and thus can be immediately available, the computation moves forward. The same idea happens with (\l -> l !! 2 + l !! 3).
So there you have it: loeb completes this strange kind of delayed value recursion. We can use it on any kind of Functor, though. All we need to do is to think of some useful Delayed values.
Chris Kuklewicz's comment notes that there's not a lot you could do interestingly with Maybe as your functor. That's because all of the delayed values over Maybe take the form
maybe (default :: a) (f :: a -> a) :: Maybe a -> a
and all of the interesting values of Maybe (Delay (Maybe a) a) ought to be Just (maybe default f), since loeb Nothing = Nothing. So at the end of the day, the default value never even gets used; we always just have that
loeb (Just (maybe default f)) == fix f
so we may as well write that directly.
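Spelling that out in GHCi (my sketch, using the loeb definition from the question):
λ> loeb (Just (maybe 0 (const 5)))
Just 5
λ> loeb (Just (maybe 0 (+ 1)))   -- does not terminate: fix (+1) diverges
^CInterrupted.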
You can use it for dynamic programming. The example that comes to mind is the Smith-Waterman algorithm.
import Data.Array
import Data.List
import Data.Ord (comparing)

data Base = T | C | A | G deriving (Eq, Show)
data Diff = Sub Base Base | Id Base | Del Base | Ins Base deriving (Eq, Show)

loeb x = let go = fmap ($ go) x in go

-- similarity score: 1 for a match, 0 for a mismatch
s a b = if a == b then 1 else 0

smithWaterman a' b' = let
  [al, bl] = map length [a', b']
  [a, b] = zipWith (\l n -> array (1, n) $ zip [1 ..] l) [a', b'] [al, bl]
  -- the scoring matrix: every cell is a function of the whole matrix,
  -- and loeb ties the knot
  h = loeb $ array ((0, 0), (al, bl)) $
    [((x, 0), const 0) | x <- [0 .. al]] ++
    [((0, y), const 0) | y <- [1 .. bl]] ++
    [((x, y), \h' -> maximum
        [ 0
        , (h' ! (x - 1, y - 1)) + s (a ! x) (b ! y)
        , (h' ! (x - 1, y)) - 1    -- gap, penalty 1
        , (h' ! (x, y - 1)) - 1    -- gap, penalty 1
        ]) | x <- [1 .. al], y <- [1 .. bl]]
  -- walk back through the matrix to recover the edit script
  ml l (0, 0) = l
  ml l (x, 0) = ml (Del (a ! x) : l) (x - 1, 0)
  ml l (0, y) = ml (Ins (b ! y) : l) (0, y - 1)
  ml l (x, y) = let
    (p, e) = maximumBy (comparing ((h !) . fst))
      [ ((x - 1, y), Del (a ! x))
      , ((x, y - 1), Ins (b ! y))
      , ((x - 1, y - 1), if a ! x == b ! y then Id (a ! x) else Sub (a ! x) (b ! y))
      ]
    in ml (e : l) p
  in ml [] (al, bl)
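To make the behaviour concrete, here is a small usage example (my addition; the expected output assumes the scoring above, with match 1, mismatch 0, and gap penalty 1):
λ> smithWaterman [T,C,A] [T,G,A]
[Id T,Sub C G,Id A]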
Here is a live example, with loop detection and loop resolution, where it is used for a Map String Float spreadsheet:
http://tryplayg.herokuapp.com/try/spreadsheet.hs/edit
This program calculates speed, time and space. Each one depends on the other two. Each cell has two values: its current entered value and its expression as a function of the other cells' values/expressions. Circularity is permitted.
The cell recalculation code uses the famous loeb expression by Dan Piponi, from 2006. To my knowledge there has not been any materialization of this formula in a real working spreadsheet until now; this one is close to it. Since loeb enters an infinite loop when circular expressions are used, the program counts the loops and reduces complexity by progressively substituting formulas with cell values until the expression has no loops.
This program is configured for immediate recalculation on cell change, but that can be adapted to allow the modification of more than one cell before recalculation by triggering it by means of a button.
Here is the blog post:
http://haskell-web.blogspot.com.es/2014/09/spreadsheet-like-program-in-browser.html

Transforming a function that computes a fixed point

I have a function which computes a fixed point in terms of iterate:
equivalenceClosure :: (Ord a) => Relation a -> Relation a
equivalenceClosure = fst . List.head                  -- "guaranteed" to exist
                   . List.dropWhile (uncurry (/=))    -- removes pairs that are not equal
                   . U.List.pairwise (,)              -- applies (,) to adjacent list elements
                   . iterate ( reflexivity
                             . symmetry
                             . transitivity
                             )
Notice that we can abstract from this to:
findFixedPoint :: (Eq a) => (a -> a) -> a -> a
findFixedPoint f = fst . List.head
                 . List.dropWhile (uncurry (/=))  -- dropWhile we have not reached the fixed point
                 . U.List.pairwise (,)            -- applies (,) to adjacent list elements
                 . iterate f
Can this function be written in terms of fix? It seems like there should be a transformation from this scheme to something with fix in it, but I don't see it.
There's quite a bit going on here, from the mechanics of lazy evaluation, to the definition of a fixed point, to the method of finding a fixed point. In short, I believe you may be conflating the fixed point of function application in the lambda calculus with the kind of fixed point you need.
It may be helpful to note that your implementation of finding the fixed-point (utilizing iterate) requires a starting value for the sequence of function application. Contrast this to the fix function, which requires no such starting value (As a heads up, the types give this away already: findFixedPoint is of type (a -> a) -> a -> a, whereas fix has type (a -> a) -> a). This is inherently because the two functions do subtly different things.
Let's dig into this a little deeper. First, I should say that you may need to give a little bit more information (your implementation of pairwise, for example), but with a naive first try, and my (possibly flawed) implementation of what I believe you want out of pairwise, your findFixedPoint function is equivalent in result to fix, but only for a certain class of functions.
Let's take a look at some code:
{-# LANGUAGE RankNTypes #-}
import Control.Monad.Fix
import qualified Data.List as List
findFixedPoint :: forall a. Eq a => (a -> a) -> a -> a
findFixedPoint f = fst . List.head
                 . List.dropWhile (uncurry (/=))  -- dropWhile we have not reached the fixed point
                 . pairwise (,)                   -- applies (,) to adjacent list elements
                 . iterate f

pairwise :: (a -> a -> b) -> [a] -> [b]
pairwise f []         = []
pairwise f [x]        = []
pairwise f (x:y:rest) = f x y : pairwise f (y:rest)
Contrast this with the definition of fix:
fix :: (a -> a) -> a
fix f = let x = f x in x
and you'll notice that we're finding a very different kind of fixed-point (i.e. we abuse lazy evaluation to generate a fixed point for function application in the mathematical sense, where we only stop evaluation iff* the resulting function, applied to itself, evaluates to the same function).
For illustration, let's define a few functions:
lambdaA = const 3
lambdaB = (*)3
and let's see the difference between fix and findFixedPoint:
*Main> fix lambdaA -- evaluates to const 3 (fix lambdaA) = 3
                   -- fixed point after one iteration
3
*Main> findFixedPoint lambdaA 0 -- evaluates to [const 3 0, const 3 (const 3 0), ... thunks]
-- followed by grabbing the head.
3
*Main> fix lambdaB -- does not stop evaluating
^CInterrupted.
*Main> findFixedPoint lambdaB 0 -- evaluates to [0, 0, ...thunks]
-- followed by grabbing the head
0
Now, if we can't specify the starting value, what is fix used for? It turns out that by adding fix to the lambda calculus, we gain the ability to specify the evaluation of recursive functions. Consider fact' = \rec n -> if n == 0 then 1 else n * rec (n-1); we can compute the fixed point of fact' as:
*Main> (fix fact') 5
120
where evaluating (fix fact') repeatedly applies fact' until we reach the same function, which we then call with the value 5. We can see this in:
fix fact'
= fact' (fix fact')
= (\rec n -> if n == 0 then 1 else n * rec (n-1)) (fix fact')
= \n -> if n == 0 then 1 else n * fix fact' (n-1)
= \n -> if n == 0 then 1 else n * fact' (fix fact') (n-1)
= \n -> if n == 0 then 1
        else n * (\rec n' -> if n' == 0 then 1 else n' * rec (n'-1)) (fix fact') (n-1)
= \n -> if n == 0 then 1
        else n * (if n-1 == 0 then 1 else (n-1) * fix fact' (n-2))
= \n -> if n == 0 then 1
        else n * (if n-1 == 0 then 1
                  else (n-1) * (if n-2 == 0 then 1
                                else (n-2) * fix fact' (n-3)))
= ...
So what does all this mean? Depending on the function you're dealing with, you won't necessarily be able to use fix to compute the kind of fixed point you want. This is, to my knowledge, dependent on the function(s) in question. Not all functions have the kind of fixed point computed by fix!
*I've avoided talking about domain theory, as I believe it would only confuse an already subtle topic. If you're curious, fix finds a certain kind of fixed point, namely the least available fixed point of the poset the function is specified over.
Just for the record, it is possible to define the function findFixedPoint using fix.
As Raeez has pointed out, recursive functions can be defined in terms of fix.
The function that you are interested in can be recursively defined as:
findFixedPoint :: Eq a => (a -> a) -> a -> a
findFixedPoint f x =
  case f x == x of
    True  -> x
    False -> findFixedPoint f (f x)
This means that we can define it as fix ffp where ffp is:
ffp :: Eq a => ((a -> a) -> a -> a) -> (a -> a) -> a -> a
ffp g f x =
  case f x == x of
    True  -> x
    False -> g f (f x)
For a concrete example, let us assume that f is defined as
f = drop 1
It is easy to see that for every finite list l we have findFixedPoint f l == [].
Here is how fix ffp would work when the "value argument" is []:
(fix ffp) f []
= { definition of fix }
ffp (fix ffp) f []
= { f [] = [] and definition of ffp }
[]
On the other hand, if the "value argument" is [42], we would have:
fix ffp f [42]
= { definition of fix }
ffp (fix ffp) f [42]
= { f [42] =/= [42] and definition of ffp }
(fix ffp) f (f [42])
= { f [42] = [] }
(fix ffp) f []
= { see above }
[]
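Putting the pieces together in GHCi (a check of mine):
λ> import Data.Function (fix)
λ> fix ffp (drop 1) [1,2,3] == findFixedPoint (drop 1) [1,2,3]
True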
