Related
Consider the pure function below, written in an imperative language:
def foo(x, y):
    x = f(x) if a(x) else x
    if c(x):
        x = g(x)
    else:
        x = h(x)
    x = f(x)
    y = f(y) if a(y) else y
    x = g(x) if b(y) else x
    return [x, y]
That function represents a style where you have to incrementally update variables. It can be avoided in most cases, but there are situations where that pattern is unavoidable - for example, writing a cooking procedure for a robot, which inherently requires a series of steps and decisions. Now, imagine we were trying to represent foo in Haskell.
foo x0 y0 =
let x1 = if a x0 then f x0 else x0 in
let x2 = if c x1 then g x1 else h x1 in
let x3 = f x2 in
let y1 = if a y0 then f y0 else y0 in
let x4 = if b y1 then g x3 else x3 in
[x4,y1]
That code works, but it is overly complicated and error-prone because of the need to manually manage the numeric tags. Notice that, after x1 is set, x0's value should never be used again, yet it still can be. If you accidentally use it, that is an undetected error.
I've managed to solve this problem using the State monad:
fooSt x y = execState (do
    (x, y) <- get
    when (a x) (put (f x, y))
    (x, y) <- get
    if c x
        then put (g x, y)
        else put (h x, y)
    (x, y) <- get
    put (f x, y)
    (x, y) <- get
    when (a y) (put (x, f y))
    (x, y) <- get
    when (b y) (put (g x, y))) (x, y)
This way, the need for tag-tracking goes away, as does the risk of accidentally using an outdated variable. But now the code is verbose and much harder to understand, mainly because of the repetition of (x,y) <- get.
So: what is a more readable, elegant and safe way to express this pattern?
Full code for testing.
Your goals
While the direct transformation of imperative code would usually lead to the ST monad and STRef, let's think about what you actually want to do:
You want to manipulate values conditionally.
You want to return that value.
You want to sequence the steps of your manipulation.
Requirements
At first this does indeed look like a case for the ST monad. However, if we follow the simple monad laws, together with do notation, we see that
do
x <- return $ if somePredicate x then g x
else h x
x <- return $ if someOtherPredicate x then a x
else b x
is exactly what you want. Since you need only the most basic functions of a monad (return and >>=), you can use the simplest:
The Identity monad
foo x y = runIdentity $ do
    x <- return $ if a x then f x
                         else x
    x <- return $ if c x then g x
                         else h x
    x <- return $ f x
    y <- return $ if a y then f y
                         else y
    x <- return $ if b y then g x
                         else x
    return (x, y)
Note that you cannot use let x = if a x then f x else x, because in this case the x would be the same on both sides, whereas
x <- return $ if a x then f x
else x
is the same as
(return $ if a x then (f x) else x) >>= \x -> ...
and the x in the if expression is clearly not the same as the resulting one, which is going to be used in the lambda on the right hand side.
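To see the difference concretely, here's a minimal, self-contained sketch (my own example, not from the question): the monadic rebinding shadows the old x, whereas the analogous let would be a recursive definition and diverge.

import Data.Functor.Identity (runIdentity)

-- Each 'x <- return ...' introduces a fresh x that shadows the previous one.
shadowDemo :: Int -> Int
shadowDemo x = runIdentity $ do
    x <- return (x + 1)   -- refers to the argument x
    x <- return (x * 2)   -- refers to the x bound on the previous line
    return x

main :: IO ()
main = print (shadowDemo 3)  -- prints 8
-- By contrast, 'let x = x + 1 in x' would loop forever, because let is recursive.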
Helpers
In order to make this more clear, you can add helpers like
condM :: Monad m => Bool -> a -> a -> m a
condM p a b = return $ if p then a else b
to get an even more concise version:
foo x y = runIdentity $ do
x <- condM (a x) (f x) x
x <- fmap f $ condM (c x) (g x) (h x)
y <- condM (a y) (f y) y
x <- condM (b y) (g x) x
return (x , y)
Ternary craziness
And while we're at it, let's crank up the craziness and introduce a ternary operator:
(?) :: Bool -> (a, a) -> a
b ? ie = if b then fst ie else snd ie
(??) :: Monad m => Bool -> (a, a) -> m a
(??) p = return . (?) p
(#) :: a -> a -> (a, a)
(#) = (,)
infixr 2 ??
infixr 2 #
infixr 2 ?
foo x y = runIdentity $ do
x <- a x ?? f x # x
x <- fmap f $ c x ?? g x # h x
y <- a y ?? f y # y
x <- b y ?? g x # x
return (x , y)
But the bottom line is that the Identity monad has everything you need for this task.
Imperative or non-imperative
One might argue whether this style is imperative. It's definitely a sequence of actions. But there's no state, unless you count the bound variables. But then, a chain of let … in … declarations also implies a sequence: you expect the first let to bind first.
Using Identity is purely functional
Either way, the code above doesn't introduce mutability. x doesn't get modified, instead you have a new x or y shadowing the last one. This gets clear if you desugar the do expression as noted above:
foo x y = runIdentity $
a x ?? f x # x >>= \x ->
c x ?? g x # h x >>= \x ->
return (f x) >>= \x ->
a y ?? f y # y >>= \y ->
b y ?? g x # x >>= \x ->
return (x , y)
Getting rid of the simplest monad
However, if we use (?) on the left-hand side and remove the returns, we can replace (>>=) :: m a -> (a -> m b) -> m b with something of type a -> (a -> b) -> b. This just happens to be flip ($). We end up with:
($>) :: a -> (a -> b) -> b
($>) = flip ($)
infixr 0 $> -- same infix as ($)
foo x y = a x ? f x # x $> \x ->
c x ? g x # h x $> \x ->
f x $> \x ->
a y ? f y # y $> \y ->
b y ? g x # x $> \x ->
(x, y)
This is very similar to the desugared do expression above. Note that any usage of Identity can be transformed into this style, and vice-versa.
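As a quick sanity check that this whole family of versions behaves like the original imperative foo, here is a runnable instance using the test definitions that appear in the ST answer further down (my choice of test values, not part of this answer):

import Data.Functor.Identity (runIdentity)

a, b, c :: Int -> Bool
a = (< 10); b = even; c = odd

f, g, h :: Int -> Int
f x = x + 3; g x = x * 2; h x = x - 1

foo :: Int -> Int -> (Int, Int)
foo x y = runIdentity $ do
    x <- return $ if a x then f x else x
    x <- return $ if c x then g x else h x
    x <- return $ f x
    y <- return $ if a y then f y else y
    x <- return $ if b y then g x else x
    return (x, y)

main :: IO ()
main = print (foo 0 0)  -- prints (9,3), matching the imperative version's [9,3]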
The problem you state looks like a nice application for arrows:
import Control.Arrow
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' p f g x = if p x then f x else g x
foo2 :: (Int,Int) -> (Int,Int)
foo2 = first (if' c g h . if' a f id) >>>
first f >>>
second (if' a f id) >>>
(\(x,y) -> (if b y then g x else x , y))
In particular, first lifts a function a -> b to (a,c) -> (b,c), which is the idiomatic way to act on one component of a pair.
Edit: if' allows a lift
import Control.Applicative (liftA3)
-- a functional if for lifting
if'' b x y = if b then x else y
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' = liftA3 if''
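This works because of the Applicative instance for functions ((->) r), where every lifted argument is fed the same input; a small self-contained check (my example):

import Control.Applicative (liftA3)

if'' :: Bool -> a -> a -> a
if'' b x y = if b then x else y

-- For the function Applicative, liftA3 h f g k = \x -> h (f x) (g x) (k x),
-- so liftA3 if'' p f g = \x -> if p x then f x else g x.
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' = liftA3 if''

main :: IO ()
main = print (if' even (+ 1) (* 2) 4)  -- even 4 is True, so prints 5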
I'd probably do something like this:
foo x y = (x', y')
  where x' = bg y' . cgh . af $ x
        y' = af y
        af z   = (if a z then f else id) z
        cgh z  = (if c z then g else h) z
        bg y x = (if b y then g else id) x
For something more complicated, you may want to consider using lens:
whenM :: Monad m => m Bool -> m () -> m ()
whenM c a = c >>= \res -> when res a
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb ml mr = mb >>= \b -> if b then ml else mr
foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
whenM (uses _1 a) $
_1 %= f
ifM (uses _1 c)
(_1 %= g)
(_1 %= h)
_1 %= f
whenM (uses _2 a) $
_2 %= f
whenM (uses _2 b) $ do
_1 %= g
And there's nothing stopping you from using more descriptive variable names:
foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
let x :: Lens (a, c) (b, c) a b
x = _1
y :: Lens (c, a) (c, b) a b
y = _2
whenM (uses x a) $
x %= f
ifM (uses x c)
(x %= g)
(x %= h)
x %= f
whenM (uses y a) $
y %= f
whenM (uses y b) $ do
x %= g
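To try this out, here is a self-contained version with concrete test definitions (borrowed from the ST answer below; they are assumptions for illustration, not part of this answer):

import Control.Lens
import Control.Monad (when)
import Control.Monad.State (execState)

whenM :: Monad m => m Bool -> m () -> m ()
whenM mb act = mb >>= \res -> when res act

ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb ml mr = mb >>= \b' -> if b' then ml else mr

-- hypothetical test definitions, matching the ST answer's
a, b, c :: Int -> Bool
a = (< 10); b = even; c = odd

f, g, h :: Int -> Int
f x = x + 3; g x = x * 2; h x = x - 1

foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
    whenM (uses _1 a) $ _1 %= f
    ifM (uses _1 c) (_1 %= g) (_1 %= h)
    _1 %= f
    whenM (uses _2 a) $ _2 %= f
    whenM (uses _2 b) $ _1 %= g

main :: IO ()
main = print (foo 0 0)  -- prints (9,3)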
This is a job for the ST (state threads) library.
ST provides:
Stateful computations in the form of the ST type. These look like ST s a for a computation that results in a value of type a, and may be run with runST to obtain a pure value of type a.
First-class mutable references in the form of the STRef type. The newSTRef a action creates a new STRef s a reference with an initial value of a, and which can be read with readSTRef ref and written with writeSTRef ref a. A single ST computation can use any number of STRef references internally.
Together, these let you express the same mutable variable functionality as in your imperative example.
To use ST and STRef, we need to import:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Monad.ST.Safe
import Data.STRef
Instead of using the low-level readSTRef and writeSTRef all over the place, we can define the following helpers to match the imperative operations that the Python-style foo example uses:
-- STRef assignment.
(=:) :: STRef s a -> ST s a -> ST s ()
ref =: x = writeSTRef ref =<< x
-- STRef function application.
($:) :: (a -> b) -> STRef s a -> ST s b
f $: ref = f `fmap` readSTRef ref
-- Postfix guard syntax.
if_ :: Monad m => m () -> m Bool -> m ()
action `if_` guard = act' =<< guard
where act' b = if b then action
else return ()
This lets us write:
ref =: x to assign the value of ST computation x to the STRef ref.
(f $: ref) to apply a pure function f to the STRef ref.
action `if_` guard to execute action only if guard results in True.
With these helpers in place, we can faithfully translate the original imperative definition of foo into Haskell:
a = (< 10)
b = even
c = odd
f x = x + 3
g x = x * 2
h x = x - 1
f3 x = x + 2
-- A stateful computation that takes two integer STRefs and results in a final [x,y].
fooST :: Integral n => STRef s n -> STRef s n -> ST s [n]
fooST x y = do
x =: (f $: x) `if_` (a $: x)
x' <- readSTRef x
if c x' then
x =: (g $: x)
else
x =: (h $: x)
x =: (f $: x)
y =: (f $: y) `if_` (a $: y)
x =: (g $: x) `if_` (b $: y)
sequence [readSTRef x, readSTRef y]
-- Pure wrapper: simply call fooST with two fresh references, and run it.
foo :: Integral n => n -> n -> [n]
foo x y = runST $ do
x' <- newSTRef x
y' <- newSTRef y
fooST x' y'
-- This will print "[9,3]".
main = print (foo 0 0)
Points to note:
Although we first had to define some syntactical helpers (=:, $:, if_) before translating foo, this demonstrates how you can use ST and STRef as a foundation to grow your own little imperative language that's directly suited to the problem at hand.
Syntax aside, this matches the structure of the original imperative definition exactly, without any error-prone restructuring. Any minor changes to the original example can be mirrored directly to Haskell. (The addition of the temporary x' <- readSTRef x binding in the Haskell code is only in order to use it with the native if/else syntax: if desired, this can be replaced with an appropriate ST-based if/else construct.)
The above code demonstrates giving both pure and stateful interfaces to the same computation: pure callers can use foo without knowing that it uses mutable state internally, while ST callers can directly use fooST (and for example provide it with existing STRefs to modify).
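As an aside, the "appropriate ST-based if/else construct" mentioned above could be as simple as this generic helper (a sketch I'm adding, not part of the original answer; it is the same ifM shape used in the lens answer above):

-- A monadic if/else: the condition and both branches are actions.
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb mt mf = mb >>= \b -> if b then mt else mf

-- With it, the x' <- readSTRef x binding in fooST can be dropped:
--   ifM (c $: x) (x =: (g $: x)) (x =: (h $: x))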
@Sibi said it best in his comment:
I would suggest you stop thinking imperatively and instead think in a functional way. I agree that it will take some time to get used to the new pattern, but trying to translate imperative ideas into functional languages isn't a great approach.
Practically speaking, your chain of let can be a good starting point:
foo x0 y0 =
let x1 = if a x0 then f x0 else x0 in
let x2 = if c x1 then g x1 else h x1 in
let x3 = f x2 in
let y1 = if a y0 then f y0 else y0 in
let x4 = if b y1 then g x3 else x3 in
[x4,y1]
But I would suggest using a single let and giving descriptive names to the intermediate stages.
In this example, unfortunately, I don't have a clue what the various x's and y's do, so I cannot suggest meaningful names. In real code you would use names such as x_normalized or x_translated, instead of x1 and x2, to describe what those values really are.
In fact, in a let or where you don't really have variables: they're just shorthand names you give to intermediate results, to make it easy to compose the final expression (the one after in or before the where.)
This is the spirit behind the x_bar and x_baz below. Try to come up with names that are reasonably descriptive, given the context of your code.
foo x y =
  let x_bar   = if a x then f x else x
      x_baz   = f (if c x_bar then g x_bar else h x_bar)
      y_bar   = if a y then f y else y
      x_there = if b y_bar then g x_baz else x_baz
  in [x_there, y_bar]
Then you can start recognizing patterns that were hidden in the imperative code. For example, x_bar and y_bar are basically the same transformation, applied respectively to x and y: that's why they have the same suffix "_bar" in this nonsensical example. Your x2 probably doesn't need an intermediate name, since you can just apply f to the result of the entire "if c then g else h".
Going on with the pattern recognition, you should factor out the transformations that you are applying to variables into sub-lambdas (or whatever you call the auxiliary functions defined in a where clause.)
Again, I don't have a clue what the original code did, so I cannot suggest meaningful names for the auxiliary functions. In a real application, f_if_a would be called normalize_if_needed or thaw_if_frozen or mow_if_overgrown... you get the idea:
foo x y =
let x_bar = f_if_a x
y_bar = f_if_a y
x_baz = f (g_if_c_else_h x_bar)
x_there = g_if_b x_baz y_bar
in [x_there, y_bar]
where
f_if_a x
| a x = f x
| otherwise = x
g_if_c_else_h x
| c x = g x
| otherwise = h x
g_if_b x y
| b y = g x
| otherwise = x
Don't disregard this naming business.
The whole point of Haskell and other pure functional languages is to express algorithms without the assignment operator, that is, without the tool that modifies the value of an existing variable.
The names you give to things inside a function definition, whether introduced as arguments, let, or where, can only refer to one value (or auxiliary function) throughout the entire definition, so that your code can be more easily reasoned about and proven correct.
If you don't give them meaningful names (and, conversely, give your code a meaningful structure) then you're missing out on the entire purpose of Haskell.
(IMHO the other answers so far, citing monads and other shenanigans, are barking up the wrong tree.)
I always prefer layering state transformers to using a single state over a tuple: it definitely declutters things by letting you "focus" on a specific layer (representations of the x and y variables in our case):
import Control.Monad.Trans.Class
import Control.Monad.Trans.State
foo :: x -> y -> (x, y)
foo x y =
(flip runState) y $ (flip execStateT) x $ do
get >>= \v -> when (a v) (put (f v))
get >>= \v -> put ((if c v then g else h) v)
modify f
lift $ get >>= \v -> when (a v) (put (f v))
lift get >>= \v -> when (b v) (modify g)
The lift function allows us to focus on the inner state layer, which is y.
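Again, plugging in the test definitions used elsewhere in this thread (an assumption on my part) shows it agrees with the other solutions:

import Control.Monad (when)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State

a, b, c :: Int -> Bool
a = (< 10); b = even; c = odd

f, g, h :: Int -> Int
f x = x + 3; g x = x * 2; h x = x - 1

foo :: Int -> Int -> (Int, Int)
foo x y =
    flip runState y $ flip execStateT x $ do
        get >>= \v -> when (a v) (put (f v))
        get >>= \v -> put ((if c v then g else h) v)
        modify f
        lift $ get >>= \v -> when (a v) (put (f v))
        lift get >>= \v -> when (b v) (modify g)

main :: IO ()
main = print (foo 0 0)  -- prints (9,3)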
I am trying to understand "Löb and möb: strange loops in Haskell", but right now the meaning is slipping away from me; I just don't see why it could be useful. Just to recall, the function loeb is defined as
loeb :: Functor f => f (f a -> a) -> f a
loeb x = go where go = fmap ($ go) x
or equivalently:
loeb x = go
where go = fmap (\z -> z go) x
In the article there is an example with the [] functor and a spreadsheet implementation, but it is a bit foreign to me, just as spreadsheets themselves are (I've never used them).
While I'm working through that spreadsheet example, I think it would help me and others a lot to have more examples besides lists. Are there any applications of loeb for Maybe or other functors?
The primary source (I think) for loeb is Dan Piponi's blog, A Neighborhood of Infinity. There he explains the whole concept in greater detail. I'll replicate a little bit of that as an answer and add some examples.
loeb implements a strange kind of lazy recursion
loeb :: Functor a => a (a x -> x) -> a x
loeb x = fmap (\a -> a (loeb x)) x
Let's imagine we have a type a, where Functor a, and an a-algebra (a function of type a x -> x). You might think of this as a way of computing a value from a structure of values. For instance, here are a few []-algebras:
length :: [Int] -> Int
(!! 3) :: [a] -> a
const 3 :: Num a => [a] -> a
\l -> l !! 2 + l !! 3 :: Num a => [a] -> a
We can see that these a-algebras can use both values stored in the Functor and the structure of the Functor itself.
Another way to think of d :: a x -> x is as a value of x which requires some context, a whole Functorized value a x, in order to be computed. Perhaps this interpretation is more clearly written as Reader (a x) x, emphasizing that this is just a value of x which is delayed, awaiting the a x context to be produced.
type Delay q x = q -> x
Using these ideas we can describe loeb as follows. We're given an f-structure containing some Delayed values, where f is a Functor
Functor f, f (Delay q x)
Naturally, if we were given a q then we could convert this into a not delayed form. In fact, there's only one (non-cheating) function that does this polymorphically:
force :: Functor f => f (Delay q x) -> q -> f x
force f q = fmap ($ q) f
What loeb does is handle the extra tricky case where q is actually force f q, the very result of this function. If you're familiar with fix, this is exactly how we can produce this result.
loeb :: Functor a => a (Delay (a x) x) -> a x
loeb f = fix (force f)
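Unfolding this definition shows it agrees with the original recursive one (a short equational derivation):

-- loeb f = fix (force f)
--        = force f (fix (force f))   -- unfold fix
--        = force f (loeb f)          -- fold the definition of loeb
--        = fmap ($ loeb f) f         -- unfold force
-- which is exactly loeb x = fmap (\a -> a (loeb x)) x.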
So to make an example, we simply must build a structure containing Delayed values. One natural example of this is to use the list examples from before
> loeb [ length :: [Int] -> Int
, const 3 :: [Int] -> Int
, const 5 :: [Int] -> Int
, (!! 2) :: [Int] -> Int
, (\l -> l !! 2 + l !! 3) :: [Int] -> Int
]
[5, 3, 5, 5, 10]
Here we can see that the list is full of values delayed waiting on the result of evaluating the list. This computation can proceed exactly because there are no loops in data dependency, so the whole thing can just be determined lazily. For instance, const 3 and const 5 are both immediately available as values. length requires that we know the length of the list but none of the values contained so it also proceeds immediately on our fixed-length list. The interesting ones are the values delayed waiting on other values from inside our result list, but since (!! 2) only ends up depending on the third value of the result list, which is determined by const 5 and thus can be immediately available, the computation moves forward. The same idea happens with (\l -> l !! 2 + l !! 3).
So there you have it: loeb completes this strange kind of delayed value recursion. We can use it on any kind of Functor, though. All we need to do is to think of some useful Delayed values.
Chris Kuklewicz's comment notes that there's not a lot you could do interestingly with Maybe as your functor. That's because all of the delayed values over Maybe take the form
maybe (default :: a) (f :: a -> a) :: Maybe a -> a
and all of the interesting values of Maybe (Delay (Maybe a) a) ought to be Just (maybe default f), since loeb Nothing = Nothing. So at the end of the day, the default value never even gets used; we always just have that
loeb (Just (maybe default f)) == Just (fix f)
so we may as well write that directly.
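A tiny runnable illustration of that last point (my example): the default really is dead code, and loeb over Just collapses to fix.

import Data.Function (fix)

loeb :: Functor f => f (f a -> a) -> f a
loeb x = go where go = fmap ($ go) x

-- The only interesting shape of a delayed value over Maybe;
-- the default [] is never consulted.
d :: Maybe [Int] -> [Int]
d = maybe [] (1 :)

main :: IO ()
main = do
    print (take 5 <$> loeb (Just d))  -- Just [1,1,1,1,1]
    print (take 5 (fix (1 :)))        -- [1,1,1,1,1], the same computation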
You can use it for dynamic programming. The example that comes to mind is the Smith-Waterman algorithm.
import Data.Array
import Data.List
import Data.Ord (comparing)

data Base = T | C | A | G deriving (Eq, Show)
data Diff = Sub Base Base | Id Base | Del Base | Ins Base deriving (Eq, Show)

loeb x = let go = fmap ($ go) x in go

-- match score: 1 for equal bases, 0 otherwise
s a b = if a == b then 1 else 0

smithWaterman a' b' = let
    [al, bl] = map length [a', b']
    [a, b] = zipWith (\l n -> array (1, n) $ zip [1 ..] l) [a', b'] [al, bl]
    -- the scoring matrix, computed with loeb: every cell is a function
    -- of the whole (eventual) matrix h'
    h = loeb $ array ((0, 0), (al, bl)) $
        [((x, 0), const 0) | x <- [0 .. al]] ++
        [((0, y), const 0) | y <- [1 .. bl]] ++
        [((x, y), \h' -> maximum
            [ 0
            , (h' ! (x - 1, y - 1)) + s (a ! x) (b ! y)
            , (h' ! (x - 1, y)) - 1   -- gap penalty
            , (h' ! (x, y - 1)) - 1   -- gap penalty
            ]) | x <- [1 .. al], y <- [1 .. bl]]
    -- trace back through the matrix, collecting the edit script
    ml l (0, 0) = l
    ml l (x, 0) = ml (Del (a ! x) : l) (x - 1, 0)
    ml l (0, y) = ml (Ins (b ! y) : l) (0, y - 1)
    ml l (x, y) = let
        (p, e) = maximumBy (comparing ((h !) . fst))
            [ ((x - 1, y), Del (a ! x))
            , ((x, y - 1), Ins (b ! y))
            , ((x - 1, y - 1), if a ! x == b ! y then Id (a ! x)
                                                 else Sub (a ! x) (b ! y))
            ]
        in ml (e : l) p
    in ml [] (al, bl)
Here is a live example where it is used with a Map String Float:
http://tryplayg.herokuapp.com/try/spreadsheet.hs/edit
with loop detection and loop resolution.
This program calculates speed, time and space. Each one depends on the other two. Each cell has two values: its currently entered value and the expression as a function of the other cells' values/expressions. Circularity is permitted.
The cell recalculation code uses the famous loeb expression by Dan Piponi, from 2006. Until now, to my knowledge, there has been no materialization of this formula in a real working spreadsheet; this one is close to it. Since loeb enters an infinite loop when circular expressions are used, the program counts the loops and reduces complexity by progressively substituting formulas with cell values until the expression has no loops.
This program is configured for immediate recalculation on cell change, but that can be adapted to allow the modification of more than one cell before recalculation by triggering it by means of a button.
This is the blog post:
http://haskell-web.blogspot.com.es/2014/09/spreadsheet-like-program-in-browser.html
I have a program which produces a series of functions f and g, which look like the following:
step (f,g) = (newF f g, newG f g)
newF f g x = r (f x) (g x)
newG f g x = s (f x) (g x)
foo = iterate step (f0,g0)
Where r and s are some uninteresting functions of f x and g x. I naively hoped that having foo be a list would mean that when I call the nth f, it will not recompute the (n-1)th f if that has already been computed (as would have happened if f and g weren't functions). Is there any way to memoize this without ripping the whole program apart (e.g. evaluating f0 and g0 on all relevant arguments and then working upward)?
You may find Data.MemoCombinators useful (in the data-memocombinators package).
You don't say what argument types your f and g take; if they both take integral values then you would use it like this:
import qualified Data.MemoCombinators as Memo
foo = iterate step (Memo.integral f0, Memo.integral g0)
If required, you could memoise the output of each step as well
step (f,g) = (Memo.integral (newF f g), Memo.integral (newG f g))
I hope you don't see this as ripping the whole program apart.
In reply to your comment:
This is the best I can come up with. It's untested, but should be working along the right lines.
I worry that converting between Double and Rational is needlessly inefficient --- if there was a Bits instance for Double we could use Memo.bits instead. So this might not ultimately be of any practical use to you.
import Control.Arrow ((&&&))
import Data.Ratio (numerator, denominator, (%))
memoV :: Memo.Memo a -> Memo.Memo (V a)
memoV m f = \(V x y z) -> table x y z
where g x y z = f (V x y z)
table = Memo.memo3 m m m g
memoRealFrac :: RealFrac a => Memo.Memo a
memoRealFrac = Memo.wrap (fromRational . uncurry (%))
                         ((numerator &&& denominator) . toRational)
                         Memo.integral
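Presumably (my extrapolation, with a stand-in V type since the question doesn't define it, and reusing the memoV, memoRealFrac and Memo import from just above) the two pieces then combine like this:

-- a stand-in for the question's V type (an assumption)
data V a = V a a a

-- a memoiser for functions taking a V Double
memoVD :: Memo.Memo (V Double)
memoVD = memoV memoRealFrac

-- and the memoised iteration becomes:
--   foo = iterate step (memoVD f0, memoVD g0)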
A different approach.
You have
step :: (V Double -> V Double, V Double -> V Double)
-> (V Double -> V Double, V Double -> V Double)
How about you change that to
step :: (V Double -> (V Double, V Double))
-> (V Double -> (V Double, V Double))
step h x = (r fx gx, s fx gx)
where (fx, gx) = h x
And also change
foo = map (\h -> (fst . h, snd . h)) bar
  where bar = iterate step (f0 &&& g0)
Hopefully the shared fx and gx should result in a bit of a speed-up.
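Here's a small self-contained sketch of that transformation, with placeholder definitions for r, s, f0 and g0 (assumptions on my part, since the question doesn't give them):

import Control.Arrow ((&&&))

-- hypothetical stand-ins for the question's functions
r, s :: Double -> Double -> Double
r = (+); s = (*)

f0, g0 :: Double -> Double
f0 = (+ 1); g0 = (* 2)

-- one function returning both results, so fx and gx are shared
step :: (Double -> (Double, Double)) -> Double -> (Double, Double)
step h x = (r fx gx, s fx gx)
  where (fx, gx) = h x

foos :: [Double -> (Double, Double)]
foos = iterate step (f0 &&& g0)

main :: IO ()
main = print ((foos !! 2) 1.0)  -- prints (8.0,16.0)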
Is there any way to memoize this without ripping the whole program apart (e.g. evaluating f0 and g0 on all relevant arguments and then working upward)?
This may be what you mean by "ripping the whole program apart", but here is a solution in which (I believe but can't test ATM) fooX can be shared.
nthFooOnX :: Integer -> Int -> (Integer, Integer)
nthFooOnX x =
let fooX = iterate step' (f0 x, g0 x)
in \n-> fooX !! n
step' (fx,gx) = (r fx gx, s fx gx)
-- testing definitions:
r = (+)
s = (*)
f0 = (+1)
g0 = (+1)
I don't know if that preserves the spirit of your original implementation.
Is it possible to write the Y Combinator in Haskell?
It seems like it would have an infinitely recursive type.
Y :: f -> b -> c
where f :: (f -> b -> c)
or something. Even a simple slightly factored factorial
factMaker _ 0 = 1
factMaker fn n = n * ((fn fn) (n - 1))
{- to be called as
(factMaker factMaker) 5
-}
fails with "Occurs check: cannot construct the infinite type: t = t -> t2 -> t1"
(The Y combinator looks like this
(define Y
(lambda (X)
((lambda (procedure)
(X (lambda (arg) ((procedure procedure) arg))))
(lambda (procedure)
(X (lambda (arg) ((procedure procedure) arg)))))))
in scheme)
Or, more succinctly as
(λ (f) ((λ (x) (f (λ (a) ((x x) a))))
(λ (x) (f (λ (a) ((x x) a))))))
For the applicative order
And
(λ (f) ((λ (x) (f (x x)))
(λ (x) (f (x x)))))
which is just an eta contraction away, giving the lazy version.
If you prefer short variable names.
Here's a non-recursive definition of the Y combinator in Haskell:
newtype Mu a = Mu (Mu a -> a)
y f = (\h -> h $ Mu h) (\x -> f . (\(Mu g) -> g) x $ x)
hat tip
The Y combinator can't be typed using Hindley-Milner types, the polymorphic type discipline on which Haskell's type system is based. You can prove this by appeal to the rules of the type system.
I don't know if it's possible to type the Y combinator by giving it a higher-rank type. It would surprise me, but I don't have a proof that it's not possible. (The key would be to identify a suitably polymorphic type for the lambda-bound x.)
If you want a fixed-point operator in Haskell, you can define one very easily because in Haskell, let-binding has fixed-point semantics:
fix :: (a -> a) -> a
fix f = f (fix f)
You can use this in the usual way to define functions and even some finite or infinite data structures.
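For example (standard textbook uses, not from the original answer):

fix :: (a -> a) -> a
fix f = f (fix f)

-- a recursive function, written without naming itself recursively
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

-- an infinite data structure
ones :: [Int]
ones = fix (1 :)

main :: IO ()
main = do
    print (factorial 5)   -- 120
    print (take 3 ones)   -- [1,1,1]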
It is also possible to use functions on recursive types to implement fixed points.
If you're interested in programming with fixed points, you want to read Bruce McAdam's technical report That About Wraps it Up.
The canonical definition of the Y combinator is as follows:
y = \f -> (\x -> f (x x)) (\x -> f (x x))
But it doesn't type check in Haskell because of the x x, since it would require an infinite type:
x :: a -> b -- x is a function
x :: a -- x is applied to x
--------------------------------
a = a -> b -- infinite type
If the type system were to allow such recursive types, it would make type checking undecidable (prone to infinite loops).
But the Y combinator will work if you force it to typecheck, e.g. by using unsafeCoerce :: a -> b:
import Unsafe.Coerce
y :: (a -> a) -> a
y = \f -> (\x -> f (unsafeCoerce x x)) (\x -> f (unsafeCoerce x x))
main = putStrLn $ y ("circular reasoning works because " ++)
This is unsafe (obviously). rampion's answer demonstrates a safer way to write a fixpoint combinator in Haskell without using recursion.
Oh, this wiki page and this Stack Overflow answer seem to answer my question.
I will write up more of an explanation later.
Now, I've found something interesting about that Mu type. Consider S = Mu Bool.
data S = S (S -> Bool)
If one treats S as a set and that equals sign as isomorphism, then the equation becomes
S ⇋ S -> Bool ⇋ Powerset(S)
So S would be a set that is isomorphic to its own powerset!
But we know from Cantor's diagonal argument that the cardinality of Powerset(S) is always strictly greater than the cardinality of S, so they are never isomorphic.
I think this is why you can define a fixed-point operator with Mu, even though you can't define one without it.
Just to make rampion's code more readable:
-- Mu :: (Mu a -> a) -> Mu a
newtype Mu a = Mu (Mu a -> a)
w :: (Mu a -> a) -> a
w h = h (Mu h)
y :: (a -> a) -> a
y f = w (\(Mu x) -> f (w x))
-- y f = f . y f
in which w stands for the omega combinator, w = \x -> x x, and y stands for the Y combinator, y = \f -> w (\x -> f (w x)).
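A quick sanity check (my own) that this y really does tie the knot:

newtype Mu a = Mu (Mu a -> a)

w :: (Mu a -> a) -> a
w h = h (Mu h)

y :: (a -> a) -> a
y f = w (\(Mu x) -> f (w x))

fac :: (Integer -> Integer) -> Integer -> Integer
fac rec n = if n <= 1 then 1 else n * rec (n - 1)

main :: IO ()
main = print (y fac 5)  -- prints 120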
Edit II: Ah, okay: I wasn't understanding how a and b were being bound in the definition of eval! Now I do. If anyone's interested, this is a diagram tracking a and b. I'm a pretty big fan of diagrams. Drawing arrows really improved my Haskell, I swear.
A Diagram of an eval call (PDF)
Sometimes I feel really dense.
In section 2.8 of Wadler's "Monads for Functional Programming," he introduces state into a simple evaluation function. The original (non-monadic) function tracks state using a series of let expressions, and is easy to follow:
data Term = Con Int | Div Term Term
deriving (Eq, Show)
type M a = State -> (a, State)
type State = Int
eval' :: Term -> M Int
eval' (Con a) x = (a, x)
eval' (Div t u) x = let (a, y) = eval' t x in
let (b, z) = eval' u y in
(a `div` b, z + 1)
The definitions of unit and bind for the monadic evaluator are similarly straightforward:
unit :: a -> M a
unit a = \x -> (a, x)
(>>=) :: M a -> (a -> M b) -> M b
m >>= k = \x -> let (a, y) = m x in
let (b, z) = k a y in
(b, z)
Here, (>>=) accepts a monadic value m :: M a, a function k :: a -> M b, and outputs a monadic value M b. The value of m is dependent on the value substituted for x in the lambda expression.
Wadler then introduces the function tick:
tick :: M ()
tick = \x -> ((), x + 1)
Again, straightforward. What isn't straightforward, however, is how to chain these functions together to produce an evaluation function that returns the number of division operators performed. Specifically, I don't understand:
(1) How tick is implemented. For instance, the following is a valid function call:
(tick >>= \() -> unit (div 4 2)) 0
~> (2, 1)
However, I can't evaluate it correctly by hand (indicating that I misunderstand something). In particular: (a) the result of evaluating tick at 0 is ((), 0), so how does the lambda expression accept ()? (b) If a is the first element of the pair returned by calling tick at 0, how does unit get evaluated?
(2) How to combine tick and unit to track the number of division operators performed. While the non-monadic evaluator is not problematic, the use of bind is confusing me here.
Edit: Thanks, everybody. I think my misunderstanding was the role of the lambda expression, '\() -> unit (div 4 2)'. If I understand it correctly,
(tick >>= (\() -> unit (div m n))) x
expands to
(\x -> let (a, y) = tick x in
       let (b, z) = (\() -> unit (div m n)) a y in
       (b, z)) x
When 'a' is applied to '\() -> unit (div m n)', no 'practical result' is yielded. The same effect could be achieved by binding any variable with a lambda and substituting a value for it. The versatility of bind, in this case, is that any value M a can be passed to it. As noted, a value M a represents a computation, for instance 'eval'. Hence:
eval (Con a) = unit a
eval (Div t u) = eval t >>= (\a ->
eval u >>= (\b ->
tick >>= (\c -> unit (a `div` b))))
If I understand correctly, 'eval t' is substituted for m and the remainder of the expression, the function
'(\a -> eval u >>= (\b -> tick >>= (\c -> unit (a `div` b))))'
is substituted for k. The result of evaluating 'eval t' is bound to (a, y), and the result of evaluating k is bound to (b, z). I have a ways to go, but this clears it up somewhat. Thanks.
You can evaluate the expression by hand like this:
(tick >>= \() -> unit (div 4 2)) 0
If you insert tick and \() -> unit (div 4 2) into the definition of >>=, this becomes:
(\x -> let (a, y) = tick x in
let (b, z) = (\() -> unit (div 4 2)) a y in
(b, z)) 0
If you now apply the function by substituting 0 for x, you get:
let (a, y) = tick 0 in
let (b, z) = (\() -> unit (div 4 2)) a y in
(b, z)
Now let's apply tick to 0:
let (a, y) = ((), 0 + 1) in
let (b, z) = (\() -> unit (div 4 2)) a y in
(b, z)
So a becomes () and y becomes 0+1 which is 1. So we have
let (b, z) = (\() -> unit (div 4 2)) () 1 in
(b, z)
If we apply the function to () we get
let (b,z) = unit (div 4 2) 1 in
(b,z)
If we apply unit, we get
let (b,z) = (div 4 2, 1) in
(b,z)
div 4 2 is 2, so the result is (2,1).
1a)
The result of evaluating tick at 0 is ((), 1) -- look at the code again, it increments the input value by one.
The lambda expression accepts () because it is the right-hand side of the bind operation, meaning its type is expected to be (() -> M b). So it takes the () as its first parameter, then uses "unit" as the M b item.
1b)
I'm not quite sure what you're asking here. The bind operator is defined to pass the result and state from the first operation (which is () and 1 respectively) into the second operation, so unit ends up being passed 1 as the current state (the result, (), was swallowed by the lambda expression). The current state is kept as-is by the unit function, and the result is the result of 4 div 2, i.e. 2.
2)
Presumably you will want some function of the type:
divCounted :: Int -> Int -> M Int
which combines tick and unit (similar to what you have), making sure to tick once to increase the count, and uses unit to give back the result.
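For instance, one way to write it (my sketch, reusing the question's M, tick, unit and >>=):

divCounted :: Int -> Int -> M Int
divCounted m n = tick >>= \() -> unit (m `div` n)

-- divCounted 4 2 0  ~>  (2, 1)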
1a) The result of evaluating tick at 0 is ((), 1), so how does the lambda expression accept ()?
The key about the state monad is that bind takes care of the second component of the pair, the state. The lambda expression only needs to handle the (), the first component of the pair, the return value.
In general, the key about the monad M is that it abstracts the whole business of threading the state away. You should think of a value of type M a as a computer program that returns a value of type a while also messing with the state. The insight is that two operations, unit and >>=, are enough to write any such program; the entire business of constructing and deconstructing pairs (a,s) can be captured in those two functions.
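Putting all the pieces together, here is a complete, runnable version of Wadler's counting evaluator (my assembly of the question's own definitions; Prelude's >>= is hidden so the paper's operator can be used as-is):

import Prelude hiding ((>>=))

data Term = Con Int | Div Term Term deriving (Eq, Show)

type State = Int
type M a = State -> (a, State)

unit :: a -> M a
unit a = \x -> (a, x)

infixl 1 >>=
(>>=) :: M a -> (a -> M b) -> M b
m >>= k = \x -> let (a, y) = m x in
                let (b, z) = k a y in
                (b, z)

-- count one division
tick :: M ()
tick = \x -> ((), x + 1)

eval :: Term -> M Int
eval (Con a)   = unit a
eval (Div t u) = eval t >>= \a ->
                 eval u >>= \b ->
                 tick >>= \_ ->
                 unit (a `div` b)

main :: IO ()
main = print (eval (Div (Div (Con 1972) (Con 2)) (Con 23)) 0)
-- prints (42,2): the result is 42, and two divisions were counted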