Reusing patterns in pattern guards or case expressions - haskell

My Haskell project includes an expression evaluator, which for the purposes of this question can be simplified to:
{-# LANGUAGE GADTs #-}

data Expression a where
    I   :: Int  -> Expression Int
    B   :: Bool -> Expression Bool
    Add :: Expression Int  -> Expression Int  -> Expression Int
    Mul :: Expression Int  -> Expression Int  -> Expression Int
    Eq  :: Expression Int  -> Expression Int  -> Expression Bool
    And :: Expression Bool -> Expression Bool -> Expression Bool
    Or  :: Expression Bool -> Expression Bool -> Expression Bool
    If  :: Expression Bool -> Expression a -> Expression a -> Expression a
-- Reduces an Expression down to the simplest representation.
reduce :: Expression a -> Expression a
-- ... implementation ...
The straightforward approach to implementing this is to write a case expression to recursively evaluate and pattern match, like so:
reduce (Add x y) = case (reduce x, reduce y) of
    (I x', I y') -> I $ x' + y'
    (x', y')     -> Add x' y'
reduce (Mul x y) = case (reduce x, reduce y) of
    (I x', I y') -> I $ x' * y'
    (x', y')     -> Mul x' y'
reduce (And x y) = case (reduce x, reduce y) of
    (B x', B y') -> B $ x' && y'
    (x', y')     -> And x' y'
-- ... and similarly for other cases.
To me, that definition looks somewhat awkward, so I then rewrote the definition using pattern guards, like so:
reduce (Add x y) | I x' <- reduce x
                 , I y' <- reduce y
                 = I $ x' + y'
I think this definition looks cleaner compared to the case expression, but when defining multiple patterns for different constructors, the pattern is repeated multiple times.
reduce (Add x y) | I x' <- reduce x
                 , I y' <- reduce y
                 = I $ x' + y'
reduce (Mul x y) | I x' <- reduce x
                 , I y' <- reduce y
                 = I $ x' * y'
Noting these repeated patterns, I was hoping there would be some syntax or structure that could cut down on the repetition in the pattern matching. Is there a generally accepted method to simplify these definitions?
Edit: after reviewing the pattern guards, I've realised they don't work as a drop-in replacement here. Although they provide the same result when x and y can be reduced to I _, they do not reduce any values when the pattern guards do not match. I would still like reduce to simplify subexpressions of Add et al.
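One workaround I can see is to bind the reduced subexpressions in a where clause and fall through with otherwise, which keeps the guard style while still reducing subexpressions (a sketch):
reduce (Add x y)
    | I x' <- rx, I y' <- ry = I $ x' + y'
    | otherwise              = Add rx ry
  where rx = reduce x
        ry = reduce y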

One partial solution, which I've used in a similar situation, is to extract the logic into a "lifting" function that takes a normal Haskell operation and applies it to your language's values. This abstracts over the wrapping/unwrapping and the resulting error handling.
The idea is to create two typeclasses for going to and from your custom type, with appropriate error handling. Then you can use these to create a liftOp function that could look like this:
liftOp :: (Extract a, Extract b, Pack c)
       => (Expression a -> Expression b -> Expression c)  -- fallback when not fully reduced
       -> (a -> b -> c)
       -> Expression a -> Expression b -> Expression c
liftOp err op a b = case res of
    Nothing  -> err a' b'
    Just val -> pack val
  where a' = reduce' a   -- reduce' is your recursive reduce
        b' = reduce' b
        res = do x <- extract a'
                 y <- extract b'
                 return $ x `op` y
Then each specific case looks like this:
Mul x y -> liftOp Mul (*) x y
Which isn't too bad: it isn't overly redundant. It encompasses the information that matters: Mul gets mapped to *, and in the error case we just apply Mul again.
You would also need instances for packing and unpacking, but these are useful anyhow. One neat trick is that these can also let you embed functions in your DSL automatically, with an instance of the form (Extract a, Pack b) => Pack (a -> b).
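For concreteness, here is a sketch of what the base classes and instances could look like (the class and method names are assumptions, since the answer leaves them open; the function-embedding instance would additionally need function values in the Expression type):
-- Illustrative sketch only.
class Extract a where
    extract :: Expression a -> Maybe a   -- Nothing if not fully reduced

class Pack a where
    pack :: a -> Expression a

instance Extract Int where
    extract (I n) = Just n
    extract _     = Nothing

instance Extract Bool where
    extract (B b) = Just b
    extract _     = Nothing

instance Pack Int  where pack = I
instance Pack Bool where pack = B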
I'm not sure this will work exactly for your example, but I hope it gives you a good starting point. You might want to wire additional error handling through the whole thing, but the good news is that most of that gets folded into the definitions of pack, extract and liftOp, so it's still pretty centralized.
I wrote up a similar solution for a related (but somewhat different) problem. It's also a way to handle going back and forth between native Haskell values and an interpreter, but the interpreter is structured differently. Some of the same ideas should still apply though!

This answer is inspired by rampion's follow-up question, which suggests the following function:
step :: Expression a -> Expression a
step x = case x of
    Add (I x) (I y) -> I $ x + y
    Mul (I x) (I y) -> I $ x * y
    Eq  (I x) (I y) -> B $ x == y
    And (B x) (B y) -> B $ x && y
    Or  (B x) (B y) -> B $ x || y
    If  (B b) x y   -> if b then x else y
    z               -> z
step looks at a single term, and reduces it if everything needed to reduce it is present. Equipped with step, we only need a way to replace a term everywhere in the expression tree. We can start by defining a way to apply a function inside every term.
{-# LANGUAGE RankNTypes #-}

emap :: (forall a. Expression a -> Expression a) -> Expression x -> Expression x
emap f x = case x of
    I a       -> I a
    B a       -> B a
    Add x y   -> Add (f x) (f y)
    Mul x y   -> Mul (f x) (f y)
    Eq  x y   -> Eq  (f x) (f y)
    And x y   -> And (f x) (f y)
    Or  x y   -> Or  (f x) (f y)
    If  x y z -> If  (f x) (f y) (f z)
Now, we need to apply a function everywhere, both to the term and everywhere inside the term. There are two basic possibilities: we can apply the function to the term before applying it inside, or we can apply it afterwards.
premap :: (forall a. Expression a -> Expression a) -> Expression x -> Expression x
premap f = emap (premap f) . f
postmap :: (forall a. Expression a -> Expression a) -> Expression x -> Expression x
postmap f = f . emap (postmap f)
This gives us two possibilities for how to use step, which I will call shorten and reduce.
shorten = premap step
reduce = postmap step
These behave a little differently. shorten removes the innermost level of terms, replacing them with literals, shortening the height of the expression tree by one. reduce completely evaluates the expression tree to a literal. Here's the result of iterating each of these on the same input
"shorten"
If (And (B True) (Or (B False) (B True))) (Add (I 1) (Mul (I 2) (I 3))) (I 0)
If (And (B True) (B True)) (Add (I 1) (I 6)) (I 0)
If (B True) (I 7) (I 0)
I 7
"reduce"
If (And (B True) (Or (B False) (B True))) (Add (I 1) (Mul (I 2) (I 3))) (I 0)
I 7
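(The traces above can be reproduced with a small driver along these lines; this is a sketch rather than part of the answer, deriving Show for the GADT and iterating until the term stops changing:)
{-# LANGUAGE GADTs, StandaloneDeriving #-}

deriving instance Show (Expression a)

-- Iterate a rewrite, collecting each step until the term stops changing
-- (compared via show, to avoid needing an Eq instance for the GADT).
steps :: (Expression a -> Expression a) -> Expression a -> [Expression a]
steps f x
    | show x' == show x = [x]
    | otherwise         = x : steps f x'
  where x' = f x

-- e.g. mapM_ print (steps shorten example) prints the "shorten" trace.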
Partial reduction
Your question implies that you sometimes expect that expressions can't be reduced completely. I'll extend your example to include something to demonstrate this case, by adding a variable, Var.
data Expression a where
    Var :: Expression Int
    ...
We will need to add support for Var to emap:
emap f x = case x of
    Var -> Var
    ...
bind will replace the variable, and evaluateFor performs a complete evaluation, traversing the expression only once.
bind :: Int -> Expression a -> Expression a
bind a x = case x of
    Var -> I a
    z   -> z
evaluateFor :: Int -> Expression a -> Expression a
evaluateFor a = postmap (step . bind a)
Now reduce iterated on an example containing a variable produces the following output
"reduce"
If (And (B True) (Or (B False) (B True))) (Add (I 1) (Mul Var (I 3))) (I 0)
Add (I 1) (Mul Var (I 3))
If the output expression from the reduction is evaluated for a specific value of Var, we can reduce the expression all the way to a literal.
"evaluateFor 5"
Add (I 1) (Mul Var (I 3))
I 16
Applicative
emap can instead be written in terms of an Applicative Functor, and postmap can be made into a generic piece of code suitable for other data types than expressions. How to do so is described in this answer to rampion's follow-up question.
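A sketch of that idea (following the spirit of the referenced answer; the names emapA and emap' are illustrative):
{-# LANGUAGE GADTs, RankNTypes #-}

import Data.Functor.Identity (Identity (..))

-- emap generalized over an arbitrary Applicative.
emapA :: Applicative f
      => (forall a. Expression a -> f (Expression a))
      -> Expression x -> f (Expression x)
emapA f x = case x of
    Var      -> pure Var      -- Var from the "Partial reduction" extension above
    I a      -> pure (I a)
    B a      -> pure (B a)
    Add a b  -> Add <$> f a <*> f b
    Mul a b  -> Mul <$> f a <*> f b
    Eq  a b  -> Eq  <$> f a <*> f b
    And a b  -> And <$> f a <*> f b
    Or  a b  -> Or  <$> f a <*> f b
    If c t e -> If  <$> f c <*> f t <*> f e

-- The pure emap is the Identity specialization:
emap' :: (forall a. Expression a -> Expression a) -> Expression x -> Expression x
emap' f = runIdentity . emapA (Identity . f)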

Related

floating pass of fully lazy lambda lifting?

I'm reading Implementing Functional Languages: a tutorial, and encountered a problem when implementing the floating pass of fully lazy lambda lifting.
I would like to describe how floating works to make this question clear; if you are familiar with it, just skip to the question below.
The concept is introduced on page 246 of the paper, and mainly implemented on pages 256-257.
In the case of let(rec) expressions, it says:
we must put floated definitions of right-hand side before the let(rec) expression itself, since the right-hand side may depend on them.
for example:
\a ->
  let f = \b -> b + (let $mfe = a * a in $mfe)
  in f
The variable $mfe is a maximal free expression (MFE) identified by a previous pass. When dealing with the let f expression, we float f out as one group and append it to [(1, [($mfe, a * a)])], which was obtained from the right-hand side of let f. Here, the first component 1 indicates the free level of the group.
When backtracking to the \a -> ... abstraction, we find that both $mfe and f are bound by it, hence we should install them here:
\a ->
  let $mfe = a * a
  in
  let f = \b -> b + $mfe
  in f
The Question
Suppose we have a program like this:
f = \x -> \y ->
  letrec
    a = \p -> cons p (b x)
    b = \q -> cons q (a y)
  in pair (a 1) (b 2)
The free level of b x and a y will be 2, since y has level 2 and they are in the same group.
Meanwhile, the free level of cons p (b x) and cons q (a y) is 3, so b x and a y will be marked as MFEs (did I make a mistake here?).
Using the algorithm given by SPJ, the program will be transformed into:
f = \x -> \y ->
  let $ay = a y -- `a` is not defined yet!
  in
  let $bx = b x -- and `b`
  in
  letrec
    a = \p -> cons p $bx
    b = \q -> cons q $ay
  in pair (a 1) (b 2)
I think the MFEs in the right-hand side of a let(rec) expression should not escape the let(rec) scope whenever they refer to its left-hand side definitions, and the correct result might look like this:
f = \x -> \y ->
  letrec
    $ay = a y
    $bx = b x
    a = \p -> cons p $bx
    b = \q -> cons q $ay
  in pair (a 1) (b 2)
Is the paper wrong, or have I just misunderstood it?

Is there a way to elegantly represent this pattern in Haskell?

Mind the pure function below, in an imperative language:
def foo(x,y):
    x = f(x) if a(x)
    if c(x):
        x = g(x)
    else:
        x = h(x)
    x = f(x)
    y = f(y) if a(y)
    x = g(x) if b(y)
    return [x,y]
That function represents a style where you have to incrementally update variables. It can be avoided in most cases, but there are situations where that pattern is unavoidable - for example, writing a cooking procedure for a robot, which inherently requires a series of steps and decisions. Now, imagine we were trying to represent foo in Haskell.
foo x0 y0 =
    let x1 = if a x0 then f x0 else x0 in
    let x2 = if c x1 then g x1 else h x1 in
    let x3 = f x2 in
    let y1 = if a y0 then f y0 else y0 in
    let x4 = if b y1 then g x3 else x3 in
    [x4,y1]
That code works, but it is too complicated and error prone due to the need for manually managing the numeric tags. Notice that, after x1 is set, x0's value should never be used again, but it still can. If you accidentally use it, that will be an undetected error.
I've managed to solve this problem using the State monad:
import Control.Monad.State

fooSt x y = execState (do
    (x,y) <- get
    when (a x) (put (f x, y))
    (x,y) <- get
    if c x
        then put (g x, y)
        else put (h x, y)
    (x,y) <- get
    put (f x, y)
    (x,y) <- get
    when (a y) (put (x, f y))
    (x,y) <- get
    when (b y) (put (g x, y))) (x,y)
This way, the need for tag-tracking goes away, as well as the risk of accidentally using an outdated variable. But now the code is verbose and much harder to understand, mainly due to the repetition of (x,y) <- get.
So: what is a more readable, elegant and safe way to express this pattern?
Full code for testing.
Your goals
While the direct transformation of imperative code would usually lead to the ST monad and STRef, let's think about what you actually want to do:
You want to manipulate values conditionally.
You want to return that value.
You want to sequence the steps of your manipulation.
Requirements
Now this indeed looks at first like the ST monad. However, if we follow the simple monad laws, together with do notation, we see that
do
    x <- return $ if somePredicate x then g x
                  else h x
    x <- return $ if someOtherPredicate x then a x
                  else b x
is exactly what you want. Since you need only the most basic functions of a monad (return and >>=), you can use the simplest:
The Identity monad
import Data.Functor.Identity (runIdentity)

foo x y = runIdentity $ do
    x <- return $ if a x then f x
                  else x
    x <- return $ if c x then g x
                  else h x
    x <- return $ f x
    y <- return $ if a y then f y
                  else y
    x <- return $ if b y then g x
                  else x
    return (x,y)
Note that you cannot use let x = if a x then f x else x, because in this case the x would be the same on both sides, whereas
x <- return $ if a x then f x
              else x
is the same as
(return $ if a x then (f x) else x) >>= \x -> ...
and the x in the if expression is clearly not the same as the resulting one, which is going to be used in the lambda on the right hand side.
Helpers
In order to make this more clear, you can add helpers like
condM :: Monad m => Bool -> a -> a -> m a
condM p a b = return $ if p then a else b
to get an even more concise version:
foo x y = runIdentity $ do
    x <- condM (a x) (f x) x
    x <- fmap f $ condM (c x) (g x) (h x)
    y <- condM (a y) (f y) y
    x <- condM (b y) (g x) x
    return (x , y)
Ternary craziness
And while we're at it, let's crank up the craziness and introduce a ternary operator:
(?) :: Bool -> (a, a) -> a
b ? ie = if b then fst ie else snd ie
(??) :: Monad m => Bool -> (a, a) -> m a
(??) p = return . (?) p
(#) :: a -> a -> (a, a)
(#) = (,)
infixr 2 ??
infixr 2 #
infixr 2 ?
foo x y = runIdentity $ do
    x <- a x ?? f x # x
    x <- fmap f $ c x ?? g x # h x
    y <- a y ?? f y # y
    x <- b y ?? g x # x
    return (x , y)
But the bottomline is, that the Identity monad has everything you need for this task.
Imperative or non-imperative
One might argue whether this style is imperative. It's definitely a sequence of actions. But there's no state, unless you count the bound variables. However, then a pack of let … in … declarations also gives an implicit sequence: you expect the first let to bind first.
Using Identity is purely functional
Either way, the code above doesn't introduce mutability. x doesn't get modified; instead you have a new x or y shadowing the last one. This becomes clear if you desugar the do expression as noted above:
foo x y = runIdentity $
    a x ?? f x # x >>= \x ->
    c x ?? g x # h x >>= \x ->
    return (f x) >>= \x ->
    a y ?? f y # y >>= \y ->
    b y ?? g x # x >>= \x ->
    return (x , y)
Getting rid of the simplest monad
However, if we use (?) on the left-hand side and remove the returns, we can replace (>>=) :: m a -> (a -> m b) -> m b by something of type a -> (a -> b) -> b. This just happens to be flip ($). We end up with:
($>) :: a -> (a -> b) -> b
($>) = flip ($)
infixr 0 $> -- same infix as ($)
foo x y = a x ? f x # x $> \x ->
          c x ? g x # h x $> \x ->
          f x $> \x ->
          a y ? f y # y $> \y ->
          b y ? g x # x $> \x ->
          (x, y)
This is very similar to the desugared do expression above. Note that any usage of Identity can be transformed into this style, and vice-versa.
The problem you state looks like a nice application for arrows:
import Control.Arrow

if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' p f g x = if p x then f x else g x

foo2 :: (Int,Int) -> (Int,Int)
foo2 = first (if' c g h . if' a f id) >>>
       first f >>>
       second (if' a f id) >>>
       (\(x,y) -> (if b y then g x else x , y))
in particular, first lifts a function a -> b to (a,c) -> (b,c), which is more idiomatic here than manual pair plumbing.
Edit: if' allows a lift
import Control.Applicative (liftA3)
-- a functional if for lifting
if'' b x y = if b then x else y
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' = liftA3 if''
I'd probably do something like this:
foo x y = ( x', y' )
  where x' = bg y' . f . cgh . af $ x
        y' = af y
        af  z  = (if a z then f else id) z
        cgh z  = (if c z then g else h) z
        bg y x = (if b y then g else id) x
For something more complicated, you may want to consider using lens:
import Control.Lens
import Control.Monad (when)
import Control.Monad.State

whenM :: Monad m => m Bool -> m () -> m ()
whenM c a = c >>= \res -> when res a

ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb ml mr = mb >>= \b -> if b then ml else mr

foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
    whenM (uses _1 a) $
        _1 %= f
    ifM (uses _1 c)
        (_1 %= g)
        (_1 %= h)
    _1 %= f
    whenM (uses _2 a) $
        _2 %= f
    whenM (uses _2 b) $
        _1 %= g
And there's nothing stopping you from using more descriptive variable names:
foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
    let x :: Lens (a, c) (b, c) a b
        x = _1
        y :: Lens (c, a) (c, b) a b
        y = _2
    whenM (uses x a) $
        x %= f
    ifM (uses x c)
        (x %= g)
        (x %= h)
    x %= f
    whenM (uses y a) $
        y %= f
    whenM (uses y b) $
        x %= g
This is a job for the ST (state transformer) library.
ST provides:
Stateful computations in the form of the ST type. These look like ST s a for a computation that results in a value of type a, and may be run with runST to obtain a pure a value.
First-class mutable references in the form of the STRef type. The newSTRef a action creates a new STRef s a reference with an initial value of a, and which can be read with readSTRef ref and written with writeSTRef ref a. A single ST computation can use any number of STRef references internally.
Together, these let you express the same mutable variable functionality as in your imperative example.
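For instance, a minimal illustration of the two pieces together (a sketch, separate from the foo translation below):
import Control.Monad.ST
import Data.STRef

-- Create a mutable reference, update it, and read it back as a pure value.
counterDemo :: Int
counterDemo = runST $ do
    ref <- newSTRef (0 :: Int)
    modifySTRef ref (+ 1)
    readSTRef ref
-- counterDemo == 1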
To use ST and STRef, we need to import:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Monad.ST.Safe
import Data.STRef
Instead of using the low-level readSTRef and writeSTRef all over the place, we can define the following helpers to match the imperative operations that the Python-style foo example uses:
-- STRef assignment.
(=:) :: STRef s a -> ST s a -> ST s ()
ref =: x = writeSTRef ref =<< x

-- STRef function application.
($:) :: (a -> b) -> STRef s a -> ST s b
f $: ref = f `fmap` readSTRef ref

-- Postfix guard syntax.
if_ :: Monad m => m () -> m Bool -> m ()
action `if_` guard = act' =<< guard
  where act' b = if b then action
                      else return ()
This lets us write:
ref =: x to assign the value of ST computation x to the STRef ref.
(f $: ref) to apply a pure function f to the STRef ref.
action `if_` guard to execute action only if guard results in True.
With these helpers in place, we can faithfully translate the original imperative definition of foo into Haskell:
a = (< 10)
b = even
c = odd
f x = x + 3
g x = x * 2
h x = x - 1
f3 x = x + 2
-- A stateful computation that takes two integer STRefs and results in a final [x,y].
fooST :: Integral n => STRef s n -> STRef s n -> ST s [n]
fooST x y = do
    x =: (f $: x) `if_` (a $: x)
    x' <- readSTRef x
    if c x' then
        x =: (g $: x)
    else
        x =: (h $: x)
    x =: (f $: x)
    y =: (f $: y) `if_` (a $: y)
    x =: (g $: x) `if_` (b $: y)
    sequence [readSTRef x, readSTRef y]
-- Pure wrapper: simply call fooST with two fresh references, and run it.
foo :: Integral n => n -> n -> [n]
foo x y = runST $ do
x' <- newSTRef x
y' <- newSTRef y
fooST x' y'
-- This will print "[9,3]".
main = print (foo 0 0)
Points to note:
Although we first had to define some syntactical helpers (=:, $:, if_) before translating foo, this demonstrates how you can use ST and STRef as a foundation to grow your own little imperative language that's directly suited to the problem at hand.
Syntax aside, this matches the structure of the original imperative definition exactly, without any error-prone restructuring. Any minor changes to the original example can be mirrored directly to Haskell. (The addition of the temporary x' <- readSTRef x binding in the Haskell code is only in order to use it with the native if/else syntax: if desired, this can be replaced with an appropriate ST-based if/else construct.)
The above code demonstrates giving both pure and stateful interfaces to the same computation: pure callers can use foo without knowing that it uses mutable state internally, while ST callers can directly use fooST (and for example provide it with existing STRefs to modify).
@Sibi said it best in his comment:
I would suggest you to stop thinking imperatively and rather think in a functional way. I agree that it will take some time to get used to the new pattern, but trying to translate imperative ideas to functional languages isn't a great approach.
Practically speaking, your chain of let can be a good starting point:
foo x0 y0 =
    let x1 = if a x0 then f x0 else x0 in
    let x2 = if c x1 then g x1 else h x1 in
    let x3 = f x2 in
    let y1 = if a y0 then f y0 else y0 in
    let x4 = if b y1 then g x3 else x3 in
    [x4,y1]
But I would suggest using a single let and giving descriptive names to the intermediate stages.
In this example unfortunately I don't have a clue what the various x's and y's do, so I cannot suggest meaningful names. In real code you would use names such as x_normalized, x_translated, or such, instead of x1 and x2, to describe what those values really are.
In fact, in a let or where you don't really have variables: they're just shorthand names you give to intermediate results, to make it easy to compose the final expression (the one after in or before the where.)
This is the spirit behind the x_bar and x_baz below. Try to come up with names that are reasonably descriptive, given the context of your code.
foo x y =
    let x_bar   = if a x then f x else x
        x_baz   = f (if c x_bar then g x_bar else h x_bar)
        y_bar   = if a y then f y else y
        x_there = if b y_bar then g x_baz else x_baz
    in [x_there, y_bar]
Then you can start recognizing patterns that were hidden in the imperative code. For example, x_bar and y_bar are basically the same transformation, applied respectively to x and y: that's why they have the same suffix "_bar" in this nonsensical example. Also, your x2 probably doesn't need an intermediate name, since you can just apply f to the result of the entire "if c then g else h".
Going on with the pattern recognition, you should factor out the transformations that you are applying to variables into sub-lambdas (or whatever you call the auxiliary functions defined in a where clause.)
Again, I don't have a clue what the original code did, so I cannot suggest meaningful names for the auxiliary functions. In a real application, f_if_a would be called normalize_if_needed or thaw_if_frozen or mow_if_overgrown... you get the idea:
foo x y =
    let x_bar   = f_if_a x
        y_bar   = f_if_a y
        x_baz   = f (g_if_c_else_h x_bar)
        x_there = g_if_b x_baz y_bar
    in [x_there, y_bar]
  where
    f_if_a x
        | a x       = f x
        | otherwise = x
    g_if_c_else_h x
        | c x       = g x
        | otherwise = h x
    g_if_b x y
        | b y       = g x
        | otherwise = x
Don't disregard this naming business.
The whole point of Haskell and other pure functional languages is to express algorithms without the assignment operator, meaning the tool that can modify the value of an existing variable.
The names you give to things inside a function definition, whether introduced as arguments, let, or where, can only refer to one value (or auxiliary function) throughout the entire definition, so that your code can be more easily reasoned about and proven correct.
If you don't give them meaningful names (and conversely giving your code a meaningful structure) then you're missing out on the entire purpose of Haskell.
(IMHO the other answers so far, citing monads and other shenanigans, are barking up the wrong tree.)
I always prefer layering state transformers to using a single state over a tuple: it definitely declutters things by letting you "focus" on a specific layer (representations of the x and y variables in our case):
import Control.Monad (when)
import Control.Monad.Trans.Class
import Control.Monad.Trans.State

foo :: x -> y -> (x, y)
foo x y =
    (flip runState) y $ (flip execStateT) x $ do
        get >>= \v -> when (a v) (put (f v))
        get >>= \v -> put ((if c v then g else h) v)
        modify f
        lift $ get >>= \v -> when (a v) (put (f v))
        lift get >>= \v -> when (b v) (modify g)
The lift function allows us to focus on the inner state layer, which is y.
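As a usage sketch (an assumption: this borrows the concrete a, b, c, f, g and h from the ST answer above, with foo's signature specialized to Int accordingly; the answer itself leaves them abstract):
-- Illustrative: with a = (< 10), b = even, c = odd,
-- f = (+ 3), g = (* 2), h = subtract 1:
main :: IO ()
main = print (foo 0 0)   -- prints (9,3)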

Why is the type of this function (a -> a) -> a?

Why is the type of this function (a -> a) -> a?
Prelude> let y f = f (y f)
Prelude> :t y
y :: (t -> t) -> t
Shouldn't it be an infinite/recursive type?
I was going to try and put into words what I think its type should be, but I just can't do it for some reason.
y :: (t -> t) -> ?WTFIsGoingOnOnTheRHS?
I don't get how f (y f) resolves to a value. The following makes a little more sense to me:
Prelude> let y f x = f (y f) x
Prelude> :t y
y :: ((a -> b) -> a -> b) -> a -> b
But it's still ridiculously confusing. What's going on?
Well, y has to be of type (a -> b) -> c, for some a, b and c we don't know yet; after all, it takes a function, f, and applies it to an argument, so it must be a function taking a function.
Since y f = f x (again, for some x), we know that the return type of y must be the return type of f itself. So, we can refine the type of y a bit: it must be (a -> b) -> b for some a and b we don't know yet.
To figure out what a is, we just have to look at the type of the value passed to f. It's y f, which is the expression we're trying to figure out the type of right now. We're saying that the type of y is (a -> b) -> b (for some a, b, etc.), so we can say that this application of y f must be of type b itself.
So, the type of the argument to f is b. Put it all back together, and we get (b -> b) -> b — which is, of course, the same thing as (a -> a) -> a.
Here's a more intuitive, but less precise view of things: we're saying that y f = f (y f), which we can expand to the equivalent y f = f (f (y f)), y f = f (f (f (y f))), and so on. So, we know that we can always apply another f around the whole thing, and since the "whole thing" in question is the result of applying f to an argument, f has to have the type a -> a; and since we just concluded that the whole thing is the result of applying f to an argument, the return type of y must be that of f itself — coming together, again, as (a -> a) -> a.
Just two points to add to other people's answers.
The function you're defining is usually called fix, and it is a fixed-point combinator: a function that computes the fixed point of another function. In mathematics, the fixed point of a function f is an argument x such that f x = x. This already allows you to infer that the type of fix has to be (a -> a) -> a; "function that takes a function from a to a, and returns an a."
You've called your function y, which seems to be after the Y combinator, but this is an inaccurate name: the Y combinator is one specific fixed point combinator, but not the same as the one you've defined here.
I don't get how f (y f) resolves to a value.
Well, the trick is that Haskell is a non-strict (a.k.a. "lazy") language. The calculation of f (y f) can terminate if f doesn't need to evaluate its y f argument in all cases. So, if you're defining factorial (as John L illustrates), fac (y fac) 1 evaluates to 1 without evaluating y fac.
Strict languages can't do this, so in those languages you cannot define a fixed-point combinator in this way. In those languages, the textbook fixed-point combinator is the Y combinator proper.
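For comparison, the eta-expanded variant below delays the recursive call under a lambda; this is a sketch of the shape a strict language would need, and fixF is an illustrative name:
-- Only works at function types, but the recursion is not unfolded
-- until an argument is supplied.
fixF :: ((a -> b) -> a -> b) -> a -> b
fixF f = \x -> f (fixF f) x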
@ehird's done a good job of explaining the type, so I'd like to show how it can resolve to a value with some examples.
f1 :: Int -> Int
f1 _ = 5
-- expansion of y applied to f1
y f1
f1 (y f1) -- definition of y
5 -- definition of f1 (the argument is ignored)
-- here's an example that uses the argument, a factorial function
fac :: (Int -> Int) -> (Int -> Int)
fac next 1 = 1
fac next n = n * next (n-1)
y fac :: Int -> Int
fac (y fac) -- def. of y
-- at this point, further evaluation requires the next argument
-- so let's try 3
fac (y fac) 3 :: Int
3 * (y fac) 2 -- def. of fac
3 * (fac (y fac) 2) -- def. of y
3 * (2 * (y fac) 1) -- def. of fac
3 * (2 * (fac (y fac) 1)) -- def. of y
3 * (2 * 1) -- def. of fac
You can follow the same steps with any function you like to see what will happen. Both of these examples converge to values, but that doesn't always happen.
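A minimal sketch of a case that does not converge, using the same definition of y as above:
y :: (a -> a) -> a
y f = f (y f)

-- y id unfolds to id (id (id ...)) and never produces a value.
diverges :: Int
diverges = y id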
Let me tell you about a combinator. It's called the "fixpoint combinator" and it has the following property:
The Property: the "fixpoint combinator" takes a function f :: (a -> a) and discovers a "fixed point" x :: a of that function such that f x == x. Some implementations of the fixpoint combinator might be better or worse at "discovering", but assuming it terminates, it will produce a fixed point of the input function. Any function that satisfies The Property can be called a "fixpoint combinator".
Call this "fixpoint combinator" y. Based on what we just said, the following are true:
-- as we said, y's input is f :: a -> a, and its output is x :: a, therefore
y :: (a -> a) -> a
-- let x be the fixed point discovered by applying y to f
y f == x -- because y discovers x, a fixed point of f, per The Property
f x == x -- the behavior of a fixed point, per The Property
-- now, per substitution of "x" with "f x" in "y f == x"
y f == f x
-- again, per substitution of "x" with "y f" in the previous line
y f == f (y f)
So there you go. You have defined y in terms of the essential property of the fixpoint combinator:
y f == f (y f). Instead of assuming that y f discovers x, you can assume that x represents a divergent computation, and still come to the same conclusion (if I'm not mistaken).
Since your function satisfies The Property, we can conclude that it is a fixpoint combinator, and that the other properties we have stated, including the type, are applicable to your function.
This isn't exactly a solid proof, but I hope it provides additional insight.

Y Combinator in Haskell

Is it possible to write the Y Combinator in Haskell?
It seems like it would have an infinitely recursive type.
Y :: f -> b -> c
where f :: (f -> b -> c)
or something. Even a simple slightly factored factorial
factMaker _ 0 = 1
factMaker fn n = n * ((fn fn) (n - 1))
{- to be called as
(factMaker factMaker) 5
-}
fails with "Occurs check: cannot construct the infinite type: t = t -> t2 -> t1"
(The Y combinator looks like this
(define Y
  (lambda (X)
    ((lambda (procedure)
       (X (lambda (arg) ((procedure procedure) arg))))
     (lambda (procedure)
       (X (lambda (arg) ((procedure procedure) arg)))))))
in scheme)
Or, more succinctly as
(λ (f) ((λ (x) (f (λ (a) ((x x) a))))
        (λ (x) (f (λ (a) ((x x) a))))))
For the applicative order
And
(λ (f) ((λ (x) (f (x x)))
        (λ (x) (f (x x)))))
Which is just an eta contraction away, for the lazy version.
If you prefer short variable names.
Here's a non-recursive definition of the y-combinator in Haskell:
newtype Mu a = Mu (Mu a -> a)
y f = (\h -> h $ Mu h) (\x -> f . (\(Mu g) -> g) x $ x)
hat tip
The Y combinator can't be typed using Hindley-Milner types, the polymorphic lambda calculus on which Haskell's type system is based. You can prove this by appeal to the rules of the type system.
I don't know if it's possible to type the Y combinator by giving it a higher-rank type. It would surprise me, but I don't have a proof that it's not possible. (The key would be to identify a suitably polymorphic type for the lambda-bound x.)
If you want a fixed-point operator in Haskell, you can define one very easily because in Haskell, let-binding has fixed-point semantics:
fix :: (a -> a) -> a
fix f = f (fix f)
You can use this in the usual way to define functions and even some finite or infinite data structures.
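For instance (an illustrative sketch using the fix just defined):
-- A recursive function, with the recursion supplied by fix:
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

-- An infinite data structure:
ones :: [Int]
ones = fix (1 :)   -- 1 : 1 : 1 : ...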
It is also possible to use functions on recursive types to implement fixed points.
If you're interested in programming with fixed points, you want to read Bruce McAdam's technical report That About Wraps it Up.
The canonical definition of the Y combinator is as follows:
y = \f -> (\x -> f (x x)) (\x -> f (x x))
But it doesn't type check in Haskell because of the x x, since it would require an infinite type:
x :: a -> b -- x is a function
x :: a -- x is applied to x
--------------------------------
a = a -> b -- infinite type
If the type system were to allow such recursive types, it would make type checking undecidable (prone to infinite loops).
But the Y combinator will work if you force it to typecheck, e.g. by using unsafeCoerce :: a -> b:
import Unsafe.Coerce
y :: (a -> a) -> a
y = \f -> (\x -> f (unsafeCoerce x x)) (\x -> f (unsafeCoerce x x))
main = putStrLn $ y ("circular reasoning works because " ++)
This is unsafe (obviously). rampion's answer demonstrates a safer way to write a fixpoint combinator in Haskell without using recursion.
Oh, this wiki page and this Stack Overflow answer seem to answer my question. I will write up more of an explanation later.
Now, I've found something interesting about that Mu type. Consider S = Mu Bool.
data S = S (S -> Bool)
If one treats S as a set and that equals sign as isomorphism, then the equation becomes
S ⇋ S -> Bool ⇋ Powerset(S)
So S would be a set that is isomorphic to its own powerset!
But we know from Cantor's diagonal argument that the cardinality of Powerset(S) is always strictly greater than the cardinality of S, so they are never isomorphic.
I think this is why you can now define a fixed-point operator, even though you couldn't without such a recursive type.
Just to make rampion's code more readable:
-- Mu :: (Mu a -> a) -> Mu a
newtype Mu a = Mu (Mu a -> a)
w :: (Mu a -> a) -> a
w h = h (Mu h)
y :: (a -> a) -> a
y f = w (\(Mu x) -> f (w x))
-- y f = f . y f
in which w stands for the omega combinator w = \x -> x x, and y stands for the y combinator y = \f -> w (f . w).
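As a quick sanity check (illustrative), this y computes fixed points even though nothing in its definition is recursive:
main :: IO ()
main = print $ y (\rec n -> if n == 0 then 1 else n * rec (n - 1)) (5 :: Integer)
-- prints 120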

Wadler, "Monads for Functional Programming," Section 2.8

Edit II: Ah, okay: I wasn't understanding how a and b were being bound in the definition of eval! Now I do. If anyone's interested, this is a diagram tracking a and b. I'm a pretty big fan of diagrams. Drawing arrows really improved my Haskell, I swear.
A Diagram of an eval call (PDF)
Sometimes I feel really dense.
In section 2.8 of Wadler's "Monads for Functional Programming," he introduces state into a simple evaluation function. The original (non-monadic) function tracks state using a series of let expressions, and is easy to follow:
data Term = Con Int | Div Term Term
    deriving (Eq, Show)
type M a = State -> (a, State)
type State = Int
eval' :: Term -> M Int
eval' (Con a) x = (a, x)
eval' (Div t u) x = let (a, y) = eval' t x in
                    let (b, z) = eval' u y in
                    (a `div` b, z + 1)
The definitions of unit and bind for the monadic evaluator are similarly straightforward:
unit :: a -> M a
unit a = \x -> (a, x)
(>>=) :: M a -> (a -> M b) -> M b
m >>= k = \x -> let (a, y) = m x in
                let (b, z) = k a y in
                (b, z)
Here, (>>=) accepts a monadic value m :: M a, a function k :: a -> M b, and outputs a monadic value M b. The value of m is dependent on the value substituted for x in the lambda expression.
Wadler then introduces the function tick:
tick :: M ()
tick = \x -> ((), x + 1)
Again, straightforward. What isn't straightforward, however, is how to chain these functions together to produce an evaluation function that returns the number of division operators performed. Specifically, I don't understand:
(1) How tick is implemented. For instance, the following is a valid function call:
(tick >>= \() -> unit (div 4 2)) 0
~> (2, 1)
However, I can't evaluate it correctly by hand (indicating that I misunderstand something). In particular: (a) The result of evaluating tick at 0 is ((), 0), so how does the lambda expression accept ()? (b) If a is the first element of the pair returned by calling tick at 0, how does unit get evaluated?
(2) How to combine tick and unit to track the number of division operators performed. While the non-monadic evaluator is not problematic, the use of bind is confusing me here.
Edit: Thanks, everybody. I think my misunderstanding was the role of the lambda expression, '\() -> unit (div 4 2)'. If I understand it correctly,
(tick >>= (\() -> unit (div m n))) x
expands to
(\x -> let (a, y) = tick x in
       let (b, z) = (\() -> unit (div m n)) a y in
       (b, z)) x
When '\() -> unit (div m n)' is applied to 'a', no 'practical result' is yielded. The same effect could be achieved by binding any variable with a lambda operator, and substituting a value for it. The versatility of bind, in this case, is that any value M a can be passed to it. As noted, a value M a represents a computation, for instance, 'eval.' Hence:
eval (Con a) = unit a
eval (Div t u) = eval t >>= (\a ->
eval u >>= (\b ->
tick >>= (\c -> unit (a `div` b))))
If I understand correctly, 'eval t' is substituted for m and the remainder of the expression, the function
'(\a -> eval u >>= (\b -> tick >>= (\c -> unit (a `div` b))))'
is substituted for k. The result of evaluating 'eval t' is bound to (a, y), and the result of evaluating k is bound to (b, z). I have a ways to go, but this clears it up somewhat. Thanks.
You can evaluate the expression by hand like this:
(tick >>= \() -> unit (div 4 2)) 0
If you insert tick and \() -> unit (div 4 2) into the definition of >>=, this becomes:
(\x -> let (a, y) = tick x in
       let (b, z) = (\() -> unit (div 4 2)) a y in
       (b, z)) 0
If you now apply the function by substituting 0 for x, you get:
let (a, y) = tick 0 in
let (b, z) = (\() -> unit (div 4 2)) a y in
(b, z)
Now let's apply tick to 0:
let (a, y) = ((), 0 + 1) in
let (b, z) = (\() -> unit (div 4 2)) a y in
(b, z)
So a becomes () and y becomes 0+1 which is 1. So we have
let (b, z) = (\() -> unit (div 4 2)) () 1 in
(b, z)
If we apply the function to () we get
let (b,z) = unit (div 4 2) 1 in
(b,z)
If we apply unit, we get
let (b,z) = (div 4 2, 1) in
(b,z)
div 4 2 is 2, so the result is (2,1).
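This hand evaluation can be checked mechanically with a self-contained sketch assembling the quoted definitions (hiding Prelude's >>= so the paper's version can be used):
import Prelude hiding ((>>=))

type State = Int
type M a = State -> (a, State)

unit :: a -> M a
unit a = \x -> (a, x)

(>>=) :: M a -> (a -> M b) -> M b
m >>= k = \x -> let (a, y) = m x in
                let (b, z) = k a y in
                (b, z)

tick :: M ()
tick = \x -> ((), x + 1)

main :: IO ()
main = print $ (tick >>= \() -> unit (div 4 2)) 0   -- prints (2,1)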
1a)
The result of evaluating tick at 0 is ((), 1) -- look at the code again, it increments the input value by one.
The lambda expression accepts () because it is the right-hand side of the bind operation, meaning its type is expected to be (() -> M b). So it takes the () as its first parameter, then uses "unit" as the M b item.
1b)
I'm not quite sure what you're asking here. The bind operator is defined to pass the result and state from the first operation (which is () and 1 respectively) into the second operation, so unit ends up being passed 1 as the current state (the result, (), was swallowed by the lambda expression). The current state is kept as-is by the unit function, and the result is the result of 4 div 2, i.e. 2.
2)
Presumably you will want some function of the type:
divCounted :: Int -> Int -> M Int
Which either combines tick and unit (similar to how you have), making sure to tick once to increase the count, and use unit to give back the result.
1a) The result of evaluating tick at 0 is ((), 1), so how does the lambda expression accept ()?
The key about the state monad is that bind takes care of the second component of the pair, the state. The lambda expression only needs to handle the (), the first component of the pair, the return value.
In general, the key about the monad M is that it abstracts the whole business of threading the state away. You should think of a value of type M a as a computer program that returns a value of type a while also messing with the state. The insight is that two operations, unit and >>=, are enough to write any such program; the entire business of constructing and deconstructing pairs (a,s) can be captured in those two functions.
