Hello everyone, I'm trying to implement a higher-order function fix that computes an attractive fixed point of an arbitrary function f :: a -> a from an initial point x, i.e. a fixed point of the form fᵏ(x) for some k, given f and x.
-- CONTRACT
fix :: Eq a => (a -> a) -> a -> a
-- DEFINITION [TODO: Implement fix]
fix f x = ?
My current attempt is:
fix f x | f x == x = x
        | otherwise = fix f x
  where x = f x
Note: your function will not terminate if the function does not converge from the starting point.
Can someone help me, please? I tried, but it didn't return anything.
A common misconception is that x = ... assigns a value in Haskell. In Haskell you do not assign values; you declare them.
This means that the x you constructed in the where clause is not the x in the head of the function, so your definition is effectively:
fix :: Eq a => (a -> a) -> a -> a
fix f _ | f x == x = x
        | otherwise = fix f x
  where x = f x
Here x is defined in terms of itself: x = f x. If Haskell tries to evaluate it, it will start calculating f(f(f(f(f(f(...)))))) without ever checking whether the fixed point has been reached.
The solution is thus to introduce a new variable, for example x2, and use it like this:
fix :: Eq a => (a -> a) -> a -> a
fix f x | x == x2 = x
        | otherwise = fix f x2
  where x2 = f x
So here x2 is the next x. If x == x2, we return x (or x2, they are equal); if not, we look for the fixed point of f starting from x2, so we have advanced one step in the "quest for the fixed point".
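A quick sanity check of this definition (my example, not part of the original answer):

> fix (`div` 2) 100
0
> fix (\n -> (n + 10) `div` 2) 0
9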
Give a different name to the next step of iteration, like this:
where x' = f x
(instead of where x = f x). Now review the rest of your existing code, and for each occurrence of x, ask yourself: did I mean x here, or x'?
You already have answers on how to write fix from scratch. If you'd like to try it using some standard Haskell functions, I suggest you look at the function until.
until :: (a -> Bool) -> (a -> a) -> a -> a
Note that the type of until is rather similar to the one you want; it just takes one extra argument of type a -> Bool. The expression until p f x iteratively applies f, starting from the initial point x, until the condition p is satisfied. You should easily be able to write fix in the form
fix f = until p f
for some predicate p :: a -> Bool (which will depend on f). Now you just need to implement this stopping condition p, which checks whether a point y you've calculated is a fixed point of f, that is, whether f y == y.
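Putting that together, one possible solution looks like this (a sketch, and a bit of a spoiler for the exercise):

fix :: Eq a => (a -> a) -> a -> a
fix f = until (\y -> f y == y) f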
Can you explain how exactly mapAccumR works, the kind of problems it solves, and how it's different from foldr? I have a hard time picturing how it works.
This is a good question. I wish the documentation was a bit nicer around this. I recently had a use for them myself, so hopefully I can explain from the perspective of someone who also had some trouble understanding how they worked.
So, the type signature of mapAccumR is:
Traversable t => (a -> b -> (a, c)) -> a -> t b -> (a, t c)
Let's just assume the Traversable in question is a list, because it's a bit easier to understand that way. Specialising the types:
(a -> b -> (a, c)) -> a -> [b] -> (a, [c])
So, to explain this, mapAccumR is a function of three arguments (ignoring currying, as we do for easy explanation), and I'm going to annotate these arguments here:
mapAccumR :: (a -> b -> (a, c)) -> a -> [b] -> (a, [c])
mapAccumR :: mappingAndAccumulationFunction -> initialAccumulatorValue -> listToMapOver -> resultantAccumulatorAndMappedListPair
Cool, so that clears things up a little bit, but it's still a bit confusing, right? So what the heck does it do?
Well, it does an accumulating map. In the first step, it takes the initialAccumulatorValue and the first b from the listToMapOver (for mapAccumR the "first" element is the rightmost one, since it works from the right) and passes those to the mappingAndAccumulationFunction, which does something with them and returns two things: 1. a new value of type a and 2. a mapped value for later collection into the mapped list (see the type of resultantAccumulatorAndMappedListPair). These two values are paired, hence the return type of the mappingAndAccumulationFunction being (a, c).
In the second and subsequent steps, it takes the (a, c) pair from the previous step, pulls the c out and remembers it by collecting it into the list it will return at the end, and passes the a as the first argument to the next application of the mappingAndAccumulationFunction, along with the next b value of the listToMapOver.
Once it runs out of b values from listToMapOver, it returns a pair which has the last value of a and a list whose contents are of type c.
So why the heck would you want this function? Example time!
annotateLeastFavourites items = snd (mapAccumR (\num item -> (num + 1, show num ++ ": " ++ item)) 1 items)
itemList = ["Geese","Monkeys","Chocolate","Chips"]
> annotateLeastFavourites itemList
["4: Geese","3: Monkeys","2: Chocolate","1: Chips"]
or, maybe this is a bit simpler to see what's going on:
> mapAccumR (\num item -> (num + 1, show num ++ ": " ++ item)) 1 ["Geese", "Monkeys", "Chocolate", "Chips"]
(5,["4: Geese","3: Monkeys","2: Chocolate","1: Chips"])
So it's a function we can reach for any time we need to pass some information along a map: it builds up a collection of mapped values (on the right) while also threading an accumulator that changes with each step (the value on the left).
Maybe you want to get the max length of a list of items as you also annotate them with each item's length
> mapAccumR (\biggestSoFar item -> (max biggestSoFar (length item), (item, length item))) 0 ["Geese", "Monkeys", "Chocolate", "Chips"]
(9,[("Geese",5),("Monkeys",7),("Chocolate",9),("Chips",5)])
There are lots of possibilities here. Hopefully now it's clear why people say this is like a combination of map and foldr. If you happen to think geometrically as I do, I think of it as when you need to transform a collection of some kind, and you need to thread some changing thing through that collection as part of the transformation.
Hope this has helped give you an intuition and store the pattern in your mind for later when you recognise you might need it in the future :)
One more example, which uses the accumulated length to indent each item by the total length of everything after it:

let (_, result) =
      mapAccumR
        (\cumulativeLength item ->
            let newLength = cumulativeLength + length item
            in (newLength, take cumulativeLength (repeat ' ') ++ item))
        0
        ["Geese", "Monkeys", "Chocolate", "Chips", "Dust", "Box"]
in mapM_ putStrLn $ reverse result
Box
   Dust
       Chips
            Chocolate
                     Monkeys
                            Geese
Sometimes, and depending on the shape of the computation you want, you'd want to use mapAccumL instead of mapAccumR, but you get the picture.
Also, note that it's defined for Traversable instances, not just lists, so it will work on all sorts of traversable containers and data structures such as Trees, Maps, Vectors, etc.
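For instance (my example), the same numbering trick works on a Data.Map, where the accumulator is threaded from the last (largest-key) entry back to the first:

> import qualified Data.Map as Map
> import Data.Traversable (mapAccumR)
> mapAccumR (\num item -> (num + 1, show num ++ ": " ++ item)) 1 (Map.fromList [(1, "Geese"), (2, "Monkeys")])
(3,fromList [(1,"2: Geese"),(2,"1: Monkeys")])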
Here are some examples, generated using Debug.SimpleReflect.
Below, f is the same f you would use in a foldr, except the arguments have been flipped. Otherwise, there's no difference.
g, on the other hand, is similar to what you would use in a map, except g x y depends not only on the current list element y but also on the result x of the fold so far.
> import Data.List
> import Debug.SimpleReflect
> mapAccumR (\x y -> (f x y, g x y)) a [] :: (Expr, [Expr])
(a,[])
> mapAccumR (\x y -> (f x y, g x y)) a [b] :: (Expr, [Expr])
(f a b,[g a b])
> mapAccumR (\x y -> (f x y, g x y)) a [b,c] :: (Expr, [Expr])
(f (f a c) b,[g (f a c) b,g a c])
> mapAccumR (\x y -> (f x y, g x y)) a [b,c,d] :: (Expr, [Expr])
(f (f (f a d) c) b,[g (f (f a d) c) b,g (f a d) c,g a d])
Here is a foldr with f having its arguments flipped, by comparison.
> foldr (\x y -> f y x) a [b,c,d]
f (f (f a d) c) b
(I have no idea about why mapAccumR chose the arguments of f in the flipped order compared to foldr.)
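For reference, here is roughly how mapAccumR behaves on lists (a sketch, not the actual library code; the name mapAccumR' is mine). It makes both the right-to-left traversal and the flipped argument order explicit:

mapAccumR' :: (a -> b -> (a, c)) -> a -> [b] -> (a, [c])
mapAccumR' _ acc []     = (acc, [])
mapAccumR' f acc (x:xs) =
  let (acc',  ys) = mapAccumR' f acc xs   -- process the rest of the list first
      (acc'', y)  = f acc' x              -- then combine with the current element
  in (acc'', y : ys)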
I'm making a calculator over abstract integers, and I'm doing an awful lot of pattern matching. I can write
add Zero x = x
add (P x) y = next $ add (prev $ P x) y
add (N x) y = prev $ add (next $ N x) y
or
add Zero x = x
add x y = case x of
  P _ -> next $ add (prev x) y
  _   -> prev $ add (next x) y
While the first way is shorter, something in the second way appeals to me more.
Which is the preferred way to do this?
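(The snippets here assume a setup roughly like the following; this is my reconstruction, since the question doesn't show the actual types:)

data Nat = One | S Nat           -- strictly positive naturals
data Z   = Zero | P Nat | N Nat  -- zero, positive, negative

next, prev :: Z -> Z             -- successor and predecessor
next Zero      = P One
next (P n)     = P (S n)
next (N One)   = Zero
next (N (S n)) = N n
prev Zero      = N One
prev (P One)   = Zero
prev (P (S n)) = P n
prev (N n)     = N (S n)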
Use as-patterns.
add Zero y    = y
add x@(P _) y = next $ add (prev x) y
add x@(N _) y = prev $ add (next x) y
I'd also consider abstracting out the common structure of your two recursive branches by noting that you just swap the roles of the prev and next functions depending on whether x is positive or negative:
add Zero x = x
add x y = f $ add (g x) y
  where (f, g) = case x of
          P _ -> (next, prev)
          N _ -> (prev, next)
About this style:
add Zero x = x
add x y = case x of
  P _ -> next $ add (prev x) y
  _   -> prev $ add (next x) y
On the positive side, it avoids some repetition, which is good.
On the negative side, the case looks non-exhaustive at first sight. Indeed, to convince oneself that the pattern match really is exhaustive, we have to reason about the possible values of the x in case x of and see that at runtime it cannot be Zero, because that case was handled above. This requires far more mental effort than the first snippet, which is obviously exhaustive.
Worse, when turning on warnings, as we should always do, GHC complains since it is not convinced that the case is exhaustive.
Personally, I wish the designers of Haskell had forbidden non-exhaustive matches entirely. I'd use a -Werror-on-non-exhaustive-matches flag if there were one. I would rather be forced to write e.g.
case something of
  A -> ...
  B -> ...
  _ -> error "the impossible happened"
than have the last branch silently inserted by the compiler for me.
Consider using the math-style definition of integers as congruence classes of pairs of naturals under the equivalence relation:
{((a,b), (c,d)) | b+c == d+a}
The intuition is that the pair of naturals (a,b) represents b-a. As mentioned in the Wikipedia article, this often reduces the number of special cases compared to the "0/positive/negative" definition. In particular, the addition operation you ask about implementing becomes a one-liner:
-- both Int and Integer are taken
data Int' = Int Nat Nat

instance Num Int' where
  -- b-a + d-c = (b+d)-(a+c)
  Int a b + Int c d = Int (a + c) (b + d)
It's kind of fun to work through the different operations with this representation. For example, Eq can be implemented with the equation given above, and Ord is similar:
instance Eq Int' where
  -- b-a == d-c  =  b+c == d+a
  Int a b == Int c d = b+c == d+a

instance Ord Int' where
  -- compare (b-a) (d-c) = compare (b+c) (d+a)
  compare (Int a b) (Int c d) = compare (b+c) (d+a)
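Negation is another pleasant one in this representation (my addition, not part of the original answer): swapping the two naturals negates the number, since -(b-a) = a-b, and subtraction then falls out of addition and negation.

-- sketch: negation just swaps the components
negate' :: Int' -> Int'
negate' (Int a b) = Int b a

sub' :: Int' -> Int' -> Int'
sub' x y = x + negate' y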
On occasion, it can be handy to normalize these things. Just like fractions can be reduced by multiplying the numerator and denominator by the same number until they're relatively prime, these things can be reduced by adding or subtracting the same number to both parts until (at least) one of them is zero.
normalize (Int (S a) (S b)) = normalize (Int a b)
normalize v = v
f x zero = Nothing
f x y = Just $ x / y
  where zero = 0
The identifier zero, even though it is bound to a literal in the where clause, simply matches everything, and GHC emits the warning Pattern match(es) are overlapped. Why doesn't it match only 0?
That's how Haskell's syntax works; every lowercase-initial variable name in a pattern (re)binds that name. Any existing binding will be shadowed.
But even if that weren't the case, the binding for zero would not be visible in the first equation anyway, because of how Haskell's scoping works. A similar thing happens in the following version:
f = \v1 v2 -> case (v1, v2) of
      (x, zero) -> Nothing
      (x, y)    -> Just $ x / y
        where zero = 0
The where clause only applies to the one alternative that it's part of, not to the whole list of alternatives. That code is pretty much the same thing as
f = \v1 v2 -> case (v1, v2) of
      (x, zero) -> Nothing
      (x, y)    -> let zero = 0 in Just $ x / y
If bound identifiers had different semantics than unbound identifiers in a pattern match, that could be quite error prone as binding a new identifier could mess up pattern matches anywhere that identifier is in scope.
For example, let's say you're importing some module Foo (unqualified), and the module Foo is changed to add a binding x = 42 for some reason. Now your pattern match would suddenly be comparing the first argument against 42 rather than binding it to x. That's a pretty hard-to-find bug.
So to avoid this kind of scenario, identifier patterns have the same semantics regardless of whether they're already bound somewhere.
Because they are very fragile. What does this compute?
f x y z = 2*x + 3*y + z
Would you expect this to be equal to
f x 3 z = 2*x + 9 + z
f _ _ _ = error "non-exhaustive patterns!"
only because there's a y = 3 defined somewhere in the same 1000+ line module?
Also consider this:
import SomeLibrary
f x y z = 2*x + 3*y + z
What if in a future release SomeLibrary defines y? We don't want that to suddenly stop working.
Finally, what if there is no Eq instance for y?
y :: a -> a
y = id
f :: a -> (a -> a) -> a
f x y = y x
f x w = w (w x)
Sure, it is a contrived example, but there's no way the runtime can compare the input function to check whether it is equal to y or not.
To disambiguate this, some newer languages like Swift use two different syntaxes, e.g. (pseudo-code):
switch someValue {
  case .a(x) :     ... // compare by equality using the outer x
  case .b(let x) : ... // redefine x as a new local variable, shadowing the outer one
}
zero is just a variable that occurs inside a pattern, just like y does in the second line. There is no difference between the two. When a variable occurs inside a pattern, it introduces a new variable. If there was already a binding for that name, the new variable shadows the old one.
So you cannot use an already-bound variable inside a pattern. Instead, you should do something like this:
f x y | y == zero = Nothing
  where zero = 0
f x y = Just $ x / y
Notice that I also moved the where clause to bring it in scope for the first line.
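A quick check of that version (my example; in GHCi the arguments default to Double):

> f 6 3
Just 2.0
> f 6 0
Nothing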
I am trying to understand lambda functions (i.e. anonymous functions) in Haskell by writing a few simple functions that use them.
In the following example, I am simply trying to take in 3 parameters and add two of the three using an anonymous function and adding the third parameter after that.
I am getting an error saying that I must declare an instance first.
specialAdd x y z = (\x y -> x + y) + z
I appreciate any explanation of why my example is not working and/or any explanation that would help me better understand lambda functions.
specialAdd x y z = (\x y -> x + y) + z
In this example, what you are trying to do is add a function to a number, which is not going to work. Look at (\x y -> x + y) + z: it has the form a + b. In order for such an expression to work, the a part and the b part must be numbers of the same type.
Haskell is a bit of an unusual language, so its error messages are rarely of the form "you can't do that". So what's going on here is that Haskell sees that (\x y -> x + y) is a function, and since in an expression like a + b, b must be the same type as a, it concludes that b must also be a function. Haskell also allows you to define your own rules for adding non-built-in types; so it can't just give you an error saying "you can't add two functions," but instead the error is "you have not defined a rule that allows me to add two functions."
The following would do what you want:
specialAdd x y z = ((\x y -> x + y) x y) + z
Here you are applying the function (\x y -> x + y) to arguments x and y, then adding the result to z.
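A quick sanity check in GHCi (my example):

Prelude> let specialAdd x y z = ((\x y -> x + y) x y) + z
Prelude> specialAdd 1 2 3
6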
A good way to practice anonymous functions is to use them with higher-order functions such as fold or map.
Using map as an entry point, start from the basic definition of map:
map f [] = []
map f (x:xs) = f x : map f xs
Build up an example:
>>> let list = [0..4]
>>> let f x = x + 1
Applying map we obtain,
>>> map f list
[1,2,3,4,5]
Now we can omit the declaration of f and replace it with an anonymous function:
>>> map (\x->x+1) list
[1,2,3,4,5]
Then we deduce that map f list == map (\x->x+1) list, thus
f = \x -> x + 1  -- the same as f x = x + 1, but in the equivalent lambda notation
So, starting from a simple function, we have seen how to translate it into an anonymous function, and how an anonymous function corresponds to a lambda abstraction.
As an exercise try to translate f x = 2*x.
Now for something more complex: an anonymous function that takes two arguments. Again, a working example:
>>> import Data.List (foldl')
>>> let add x y = x + y
>>> foldl' add 0 [0..4]
10
This can be rewritten using an anonymous function as,
>>> foldl' (\x y -> x + y) 0 [0..4]
Again, using this equality we deduce that add = \x y -> x + y.
Moreover, since in Haskell all functions are functions of one argument and can be partially applied, we can rewrite our previous anonymous function as add = \x -> (\y -> x + y).
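For instance (my example), the partially applied form gives you new functions for free:

>>> let add = \x -> (\y -> x + y)
>>> let addFive = add 5
>>> addFive 3
8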
So where is the trick? So far I have only shown how anonymous functions are used inside higher-order ones and, starting from that, how a named function can be rewritten in lambda notation. How does that help you learn to write anonymous functions yourself?
Simply because I've shown you an existing framework built on higher-order functions.
This framework is a great opportunity to get comfortable with the notation.
Starting from it, an endless range of exercises can be derived; for example, try the following.
A - Find the corresponding anonymous functions:
1 - let f (x,y) = x + y in map f [(0,1),(2,3),(-1,1)]
2 - let f x y = x * y in foldl' f 1 [1..5]
B - Rewrite all of them using lambda notation in a declarative form (f = \x -> (\y -> ...))
And so on ....
To summarize,
A function such as
(F0) f x1 x2 ... xn = {BODY of f}
can always be rewritten as
(F1) f = \x1 x2 ... xn -> {BODY of f}
where
(F2) (\x1 x2 ... xn -> {BODY of f})
The F2 form is just an anonymous function, a pure translation of the function into lambda calculus form. F1 is the declarative lambda notation (we declare f by binding it to the anonymous F2). F0 is the usual notation of Haskellers.
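A concrete instance of this summary (my example):

add3 x y z = x + y + z         -- F0
add3' = \x y z -> x + y + z    -- F1
-- (\x y z -> x + y + z)       -- F2, the anonymous form itself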
A last note: we can also put parentheses between the arguments, as in \x -> (\y -> ...), which creates a closure. Doing so means that part of the function's body can be fully evaluated using only a subset of its arguments (i.e. converted into a form with no remaining free variables), but that's another story.
Here is a correct form:
specialAdd a b c = ((\x y -> x + y) a b) + c
Example from Learn You a Haskell...:
zipWith (\a b -> (a * 30 + 3) / b) [5,4,3,2,1] [1,2,3,4,5]
Great explanation:
http://learnyouahaskell.com/higher-order-functions#lambdas
From what I understand, lambda (anonymous) functions let you declare a function "inline" without needing to give it a name. The "\" (an ASCII stand-in for the Greek λ) precedes the variable names for the expression that follows the "->". For example,
(\x y -> x + y)
is an anonymous (lambda) function similar to (+). It takes two parameters of some Num type and returns their sum:
Prelude> :type (+)
(+) :: Num a => a -> a -> a
Prelude> :type (\x y -> x + y)
(\x y -> x + y) :: Num a => a -> a -> a
Your example is not working because, as others have pointed out, its right-hand side uses a lambda function, (\x y -> x + y), as a parameter for the (+) operator, which is defined by default only for Num types. Some of the beauty of a lambda function lies in its "anonymous" use. Vladimir showed how you can use the lambda function in your declaration by passing it the variables from the left side. A more "anonymous" use is simply to call it with arguments, without ever giving the function a name (hence anonymous). For example,
Prelude> (\x y z -> x + y + z) 1 2 3
6
and if you like writing parentheses:
Prelude> (((+).) . (+)) 1 2 3
6
Or in a longer expression (as in your example declaration), e.g.,
Prelude> filter (\x -> length x < 3) [[1],[1,2],[1,2,3]]
[[1],[1,2]]
You are trying to use (+) as something like (Num a) => (a -> a -> a) -> a -> ?? which is not correct.
(+) is defined in the class Num and (a -> a -> a) is not an instance of this class.
What exactly are you trying to achieve ?
Suppose for a minute that we think the following is a good idea:
data Fold x y = Fold {start :: y, step :: x -> y -> y}
fold :: Fold x y -> [x] -> y
Under this scheme, functions such as length or sum can be implemented by calling fold with the appropriate Fold object as argument.
Now, suppose you want to do clever optimisation tricks. In particular, suppose you want to write
unFold :: ([x] -> y) -> Fold x y
It should be relatively easy to write a RULES pragma such that fold . unFold = id. But the interesting question is... can we actually implement unFold?
Obviously you can use RULES to apply arbitrary code transformations, whether or not they preserve the original meaning of the code. But can you really write an unFold implementation which actually does what its type signature suggests?
No, it's not possible. Proof: let
f :: [()] -> Bool
f []   = False
f [()] = False
f _    = True
First we must, for f' = unFold f, have start f' = False, because when folding over the empty list we directly get the start value. Then we must require step f' () False = False to achieve fold f' [()] = False. But when now evaluating fold f' [(),()], we would again only get a call step f' () False, which we had to define as False, leading to fold f' [(),()] ≡ False, whereas f[(),()] ≡ True. So there exists no unFold f that fulfills fold $ unFold f ≡ f. □
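The same argument, spelled out in code (a sketch; it assumes the obvious implementation fold (Fold s st) = foldr st s, which the question leaves unspecified):

-- any candidate f' = unFold f is forced to have start f' = False and
-- step f' () False = False; for example:
f' :: Fold () Bool
f' = Fold { start = False, step = \() b -> b }

-- but then
--   fold f' [(), ()]  ==  step f' () (step f' () False)  ==  False
-- whereas
--   f [(), ()]        ==  True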
You can, but you need to make a slight modification to Fold in order to pull it off.
All functions on lists can be expressed as a fold, but sometimes to accomplish this, extra bookkeeping is needed. Suppose we add an additional type parameter to your Fold type, which passes along this additional contextual information.
data Fold a c r = Fold { _start :: (c, r), _step :: a -> (c,r) -> (c,r) }
Now we can implement fold like so
fold :: Fold a c r -> [a] -> r
fold (Fold start step) = snd . foldr step start
Now what happens when we try to go the other way?
unFold :: ([a] -> r) -> Fold a c r
Where does the c come from? Functions are opaque values, so it's hard to know how to inspect a function and know which contextual information it relies on. So, let's cheat a little. We're going to have the "contextual information" be the entire list, so then when we get to the leftmost element, we can just apply the function to the original list, ignoring the prior cumulative results.
unFold :: ([a] -> r) -> Fold a [a] r
unFold f = Fold { _start = ([], f [])
                , _step = \a (c, _r) -> let c' = a:c in (c', f c') }
Now, sadly, this does not necessarily compose with fold, because it requires that c must be [a]. Let's fix that by hiding c with existential quantification.
{-# LANGUAGE ExistentialQuantification #-}

data Fold a r = forall c. Fold
  { _start :: (c, r)
  , _step :: a -> (c, r) -> (c, r) }
fold :: Fold a r -> [a] -> r
fold (Fold start step) = snd . foldr step start
unFold :: ([a] -> r) -> Fold a r
unFold f = Fold start step where
  start = ([], f [])
  step a (c, _r) = let c' = a:c in (c', f c')
Now, it should always be true that fold . unFold = id. And, given a relaxed notion of equality for the Fold data type, you could also say that unFold . fold = id. You can even provide a smart constructor that acts like the old Fold constructor:
makeFold :: r -> (a -> r -> r) -> Fold a r
makeFold start step = Fold start' step' where
  start' = ((), start)
  step' a ((), r) = ((), step a r)
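For example (my example, not from the answer), the usual length-as-a-fold, and a round trip through unFold:

lengthFold :: Fold a Int
lengthFold = makeFold 0 (\_ n -> n + 1)

-- fold lengthFold [1,2,3]                  ==  3
-- fold (unFold (fold lengthFold)) [1,2,3]  ==  3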
tl;dr:
Conclusion 1: you can't
What you asked for originally isn't possible, at least not by any version of what you wanted I can come up with. (See below.)
If you change your data type to allow me to store intermediate calculations, I think I'll be fine, but even then the function unFold would be rather inefficient, which seems to run counter to your clever-optimisation-tricks agenda!
Conclusion 2: I don't think it achieves what you want, even if you work around it by changing the types
Any optimisation of the list algorithm would be subject to the problem that you've calculated the step function using the original unoptimised function, and quite probably in a complicated way.
Since there's no equality on functions, optimising step to something efficient isn't possible. I think you need a human to do unFold, not a compiler.
Anyway, back to the original question:
Could fold . unFold = id ?
No. Suppose we have
isSingleton :: [a] -> Bool
isSingleton [x] = True
isSingleton _ = False
then if we had unFold :: ([x] -> y) -> Fold x y, and foldSingleton were the same as unFold isSingleton, we would need to have
foldSingleton = Fold {start = False , step = ???}
Where step takes an element of the list and updates the result.
Now isSingleton "a" == True, we need
step False = True
and because isSingleton "ab" == False, we need
step True = False
so step = not would do so far, but also isSingleton "abc" == False so we also need
step False = False
Since there are functions ([x] -> y) that cannot be represented by a value of type Fold x y, there cannot exist a function unFold :: ([x] -> y) -> Fold x y such that fold . unFold = id, because id is a total function.
Edit:
It turns out you're not convinced by this, because you only expected unFold to work on functions that had a representation as a fold, so maybe you meant unFold.fold = id.
Could unFold . fold = id ?
No.
Even if you just want unFold to work on functions ([x] -> y) that can be obtained using fold :: Fold x y -> ([x] -> y), I don't think it's possible. Let's address the question by assuming now we have defined
combine :: X -> Y -> Y
initial :: Y
folded :: [X] -> Y
folded = fold $ Fold initial combine
Recovering the value initial is trivial: initial = folded [].
Recovery of the original combine is not, because there's no way to go from a function that gives you some values of Y to one which combines arbitrary values of Y.
For an example, if we had X = Y = Int and I defined
combine x y | y < 0     = -10
            | otherwise = y + 1

initial = 0
then since combine just adds one to y every time you use it on a non-negative y, and the initial value is 0, folded is indistinguishable from length in terms of its output. Notice that since folded xs is never negative, it's also impossible to define a function unFold :: ([x] -> y) -> Fold x y that ever recovers our combine function. This boils down to the fact that fold is not injective; it carries different values of type Fold x y to the same value of type [x] -> y.
Thus I've proved two things: if unFold :: ([x] -> y) -> Fold x y then both fold.unFold /= id and now also unFold.fold /= id
I bet you're not convinced by this either, because you don't really care whether you got Fold 0 (\_ y -> y+1) or Fold 0 combine back from unFold folded, seeing as they have the same value when refolded! Let's narrow the goalposts one more time. Perhaps you want unFold to work whenever the function is obtainable via fold, and you're happy for it not to give you inconsistent answers as long as when you fold the result again, you get the same function. I can summarise that with this next question:
Could fold . unFold . fold = fold ?
i.e. Could you define unFold so that fold.unFold is the identity on the set of functions obtainable via fold?
I'm really convinced this isn't possible, because calculating the step function without retaining extra information about intermediate values on sublists is not a tractable problem.
Suppose we had
unFold f = Fold {start = f [], step = recoverstep f}
we need
recoverstep f x1 initial == f [x1]
so if there's an Eq instance for y (ring the alarm bells!), then recoverstep must have the same effect as
recoverstep f x1 y | y == initial = f [x1]
also we need
recoverstep f x2 (f [x1]) == f [x1,x2]
so if there's an Eq instance for y, then recoverstep must have the same effect as
recoverstep f x2 y | y == (f [x1]) = f [x1,x2]
but there's a massive problem here: the variable x1 is free in the right-hand side of this equation. This means that, logically, we can't tell what value the step function should have on an x unless we already know what values it has been used on. We would need to store the values of f [x1], f [x1,x2], etc. in the Fold data type to make it work, and this is the clincher as to why we can't define unFold. If you change the data type Fold to allow us to store information about intermediate lists, I can see it would work, but as it stands it's impossible to recover the context.
Similar to Dan's answer, but using a slightly different approach. Instead of pairing the accumulator with partial results which will be thrown away at the end, we add a "post-processing" function which will convert from the accumulator type to the final result.
The same "cheat" for unFold just does all the work in the post-processing step:
{-# LANGUAGE ExistentialQuantification #-}
data Fold a r = forall c. Fold
  { _start :: c
  , _step :: a -> c -> c
  , _result :: c -> r }
fold :: Fold a r -> [a] -> r
fold (Fold start step result) = result . foldr step start
unFold :: ([a] -> r) -> Fold a r
unFold f = Fold [] (:) f
makeFold :: r -> (a -> r -> r) -> Fold a r
makeFold start step = Fold start step id
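For example (my example, not from the answer), the post-processing slot lets you express folds whose result type differs from the accumulator, such as an average:

avgFold :: Fold Double Double
avgFold = Fold (0, 0 :: Int)                       -- accumulator: (running sum, count)
               (\x (s, n) -> (s + x, n + 1))
               (\(s, n) -> s / fromIntegral n)     -- post-process into the final result

-- fold avgFold [1,2,3,4]  ==  2.5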