Concise way to conditionally update map in State monad

Below is the code from an answer regarding memoization, showing a memoization function used in the State monad, where the state is updated with the result of the passed function if the key is not already in the map.
import qualified Data.Map as Map
import Control.Monad.State

type MyMemo a b = State (Map.Map a b) b

myMemo :: Ord a => (a -> MyMemo a b) -> a -> MyMemo a b
myMemo f x = do
  map <- get
  case Map.lookup x map of
    Just y -> return y
    Nothing -> do
      y <- f x
      modify $ \map' -> Map.insert x y map'
      return y
It doesn't seem like idiomatic Haskell: it feels very imperative, with very little happening on each line.
Is there a way to do the above, but in a more concise/functional style? I've had a look around the functions available at http://hackage.haskell.org/package/transformers-0.5.4.0/docs/Control-Monad-Trans-State-Lazy.html#v:state, but nothing really seems helpful.

I think your code is already in a functional style, but you can simplify it a bit:
myMemo f x = maybe work return =<< gets (Map.lookup x)
  where
    work = do
      y <- f x
      modify $ Map.insert x y
      return y

This is an alternative that uses mapState, as well as >>= and maybe from https://stackoverflow.com/a/44515364/1319998, and avoids do notation entirely:
myMemo f x = gets (Map.lookup x) >>= maybe y' return
  where
    y' = mapState (\(y, map) -> (y, Map.insert x y map)) $ f x

This is an alternative that expands on https://stackoverflow.com/a/44515364/1319998, using more >>= to avoid do notation entirely:
myMemo :: Ord a => (a -> MyMemo a b) -> a -> MyMemo a b
myMemo f x = gets (Map.lookup x) >>= maybe y' return
  where
    y' = f x >>= \y -> state $ \map -> (y, Map.insert x y map)

This is an alternative that expands on https://stackoverflow.com/a/44515364/1319998, essentially desugaring the do notation:
myMemo f x = gets (Map.lookup x) >>= maybe y' return
  where
    y' = f x >>= \y -> modify (Map.insert x y) >> return y
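For completeness, here is a small usage sketch (the Fibonacci example is mine, not from the question or answers), assuming the MyMemo type and one of the myMemo definitions above are in scope:

-- A memoised Fibonacci built on top of myMemo.
fibMemo :: Integer -> MyMemo Integer Integer
fibMemo = myMemo fib
  where
    fib n
      | n < 2     = return n
      | otherwise = (+) <$> fibMemo (n - 1) <*> fibMemo (n - 2)

-- evalState (fibMemo 50) Map.empty == 12586269025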

Related

Can Recursive-Do Be Desugared

Can a recursive-do statement be desugared to a series of >>= statements? If so, what does the reverse state monad's definition for >>= look like when it's desugared?
instance MonadFix m => Monad (StateT s m) where
  return x = ...
  m >>= f = StateT $ \s -> do
    rec
      (x, s'') <- runStateT m s'
      (x', s') <- runStateT (f x) s
    return (x', s'')
Recursive-do desugars not only to a series of >>= calls, but also, as long as there is actually recursion, into an mfix call. It is in the mfix call that the whole recursiveness happens, via what is technically termed "magic fairy dust".
Seriously though, how it happens is different for every monad, and that's why it's a class MonadFix rather than just a function. But the important point is that it can "magically" pass you your own result as a parameter, which is only possible due to Haskell's laziness, and therefore must be handled with care.
In general, something like this:
do
  rec
    x <- f y
    y <- g x
  return $ h x y
Desugars into this:
mfix (\ ~(x, y) -> do
        x' <- f y
        y' <- g x'
        return (x', y')
     )
  >>= (\(x, y) -> h x y)
So, applying this to the reverse state definition, it would look like this:
m >>= f = StateT $ \s ->
  mfix (\ ~((x, s''), (x', s')) -> do
          (x0, s0'') <- runStateT m s'
          (x0', s0') <- runStateT (f x0) s
          return ((x0, s0''), (x0', s0'))
       )
    >>= (\((x, s''), (x', s')) -> return (x', s''))
And from here, we can just desugar the regular do as usual:
m >>= f = StateT $ \s ->
  mfix (\ ~((x, s''), (x', s')) ->
          runStateT m s' >>= \(x0, s0'') ->
          runStateT (f x0) s >>= \(x0', s0') ->
          return ((x0, s0''), (x0', s0'))
       )
    >>= (\((x, s''), (x', s')) -> return (x', s''))
(that is, unless I messed up something - a lot of ticks flying around :-)
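To make the general desugaring concrete, here is a tiny self-contained illustration of my own (not from the answer), showing a recursive-do block and its hand-desugared mfix form producing the same lazily self-referential value:

{-# LANGUAGE RecursiveDo #-}
import Control.Monad.Fix (mfix)

-- A lazily self-referential list built inside Maybe via recursive-do...
ones :: Maybe [Int]
ones = mdo
  xs <- Just (1 : xs)   -- xs appears on its own right-hand side
  return xs

-- ...and the same computation desugared by hand into an mfix call.
onesDesugared :: Maybe [Int]
onesDesugared = mfix (\xs -> Just (1 : xs))

main :: IO ()
main = do
  print (fmap (take 5) ones)           -- Just [1,1,1,1,1]
  print (fmap (take 5) onesDesugared)  -- Just [1,1,1,1,1]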

Notational confusion in a basic type declaration exercise

I'm going through the new book Haskell Programming from First Principles. It seems decent, but I feel that there are some confusing holes in the explanations. I apologize if I'm missing something basic.
The last problem in chapter 5 is to fill in the ??? below so that things make sense:
munge :: (x -> y) -> (y -> (w, z)) -> x -> w
munge = ???
The solution which was explained to me (after much head-scratching) goes:
g :: y -> (w, z)
g = undefined
f :: x -> y
f = undefined
munge :: (x -> y) -> (y -> (w, z)) -> x -> w
munge g f v = fst (g (f v))
I'm getting hung up on this example in two ways.
First, it seems like the munge function ought to take a function as input which takes x -> y. But the way munge is defined, it seems like we supply an additional argument v to the function f first. But if f :: x -> y, then won't the expression f v be of type just y instead of x -> y?
Second, I'm struggling to understand why the x appears in the second-to-last position in the type declaration. At that point I feel like the logical next piece after the (y -> (w, z)) step should just be w, since at that stage the function g is being applied to fst and w ought to be the type of what fst returns. I can feel that I'm close, but can't quite close the gap.
Clearly I'm not understanding the notation correctly. Can anyone set me straight?
EDIT: Ok, here is a clarifying question to the second part. Is it possible to revise the munge function so that it has the following type (i.e. original type with second-to-last x application omitted)? If so what would it look like?
munge :: (x -> y) -> (y -> (w, z)) -> w
The answer is incorrect and ill-typed. f and g should be swapped:
munge :: (x -> y) -> (y -> (w, z)) -> x -> w
munge g f v = fst (f (g v))
I'm not sure if that clears up your confusion.
EDIT In case it's interesting, here are more equivalent ways of writing this function and its type:
-- notice parens in type signature; `->` associates right
munge :: (x -> y) -> ((y -> (w, z)) -> (x -> w))
munge g f v = -- omitted
-- type signature omitted
munge boop _plort zOWY = fst (_plort (boop zOWY))
munge g f = fst . f . g
munge g = \f v -> fst . f . g $ v
-- don't do this please
munge = ((fst .) .) . flip (.)
EDIT2 It might be helpful to play around with this in GHCi, asking the inferred type of different expressions:
Prelude> let munge g f v = fst (f (g v))
Prelude> :t munge
munge :: (t1 -> t) -> (t -> (a, b)) -> t1 -> a
Prelude> :t munge head
munge head :: (t -> (a, b)) -> [t] -> a
Prelude> :t munge head (\x-> (x, not x))
munge head (\x-> (x, not x)) :: [Bool] -> Bool
Prelude> :t munge ((+1) . fst . snd . head)
munge ((+1) . fst . snd . head)
:: Num t => (t -> (a, b)) -> [(a1, (t, b1))] -> a
The solution, confusingly, is using the same variables f and g for two different things: as global names for two functions, and as parameter names in defining munge. Making a change of variable should make it clearer:
g :: y -> (w, z)
g = undefined
f :: x -> y
f = undefined
munge :: (x -> y) -> (y -> (w, z)) -> x -> w
munge f1 f2 v = fst (f2 (f1 v)) -- fst . f2 . f1 $ v
Then you would call munge on f and g with something like
munge f g someArgumentForF
Inside munge, f (called f1) is first applied to someArgumentForF (called v) to get a value that can be passed to g (called f2). This produces a tuple, and applying fst to the tuple returns the value of type w needed as the final result.
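As a concrete usage sketch (the length/parity example is mine, not from the book or the answers):

munge :: (x -> y) -> (y -> (w, z)) -> x -> w
munge f1 f2 v = fst (f2 (f1 v))

-- length plays the role of the (x -> y) argument, and the lambda the (y -> (w, z)) one.
example :: Int
example = munge length (\n -> (n, even n)) "hello"   -- 5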

Is there a way to elegantly represent this pattern in Haskell?

Mind the pure function below, in an imperative language:
def foo(x,y):
    x = f(x) if a(x)
    if c(x):
        x = g(x)
    else:
        x = h(x)
    x = f(x)
    y = f(y) if a(y)
    x = g(x) if b(y)
    return [x,y]
That function represents a style where you have to incrementally update variables. It can be avoided in most cases, but there are situations where that pattern is unavoidable - for example, writing a cooking procedure for a robot, which inherently requires a series of steps and decisions. Now, imagine we were trying to represent foo in Haskell.
foo x0 y0 =
  let x1 = if a x0 then f x0 else x0 in
  let x2 = if c x1 then g x1 else h x1 in
  let x3 = f x2 in
  let y1 = if a y0 then f y0 else y0 in
  let x4 = if b y1 then g x3 else x3 in
  [x4, y1]
That code works, but it is too complicated and error prone due to the need for manually managing the numeric tags. Notice that, after x1 is set, x0's value should never be used again, but it still can. If you accidentally use it, that will be an undetected error.
I've managed to solve this problem using the State monad:
fooSt x y = execState (do
  (x, y) <- get
  when (a x) (put (f x, y))
  (x, y) <- get
  if c x
    then put (g x, y)
    else put (h x, y)
  (x, y) <- get
  put (f x, y)
  (x, y) <- get
  when (a y) (put (x, f y))
  (x, y) <- get
  when (b y) (put (g x, y))) (x, y)
This way, need for tag-tracking goes away, as well as the risk of accidentally using an outdated variable. But now the code is verbose and much harder to understand, mainly due to the repetition of (x,y) <- get.
So: what is a more readable, elegant and safe way to express this pattern?
Full code for testing.
Your goals
While the direct transformation of imperative code would usually lead to the ST monad and STRef, let's think about what you actually want to do:
You want to manipulate values conditionally.
You want to return that value.
You want to sequence the steps of your manipulation.
Requirements
At first this indeed looks like the ST monad. However, if we follow the simple monad laws, together with do notation, we see that
do
  x <- return $ if somePredicate x then g x
                                    else h x
  x <- return $ if someOtherPredicate x then a x
                                         else b x
is exactly what you want. Since you need only the most basic functions of a monad (return and >>=), you can use the simplest:
The Identity monad
foo x y = runIdentity $ do
  x <- return $ if a x then f x
                       else x
  x <- return $ if c x then g x
                       else h x
  x <- return $ f x
  y <- return $ if a y then f y
                       else y
  x <- return $ if b y then g x
                       else x
  return (x, y)
Note that you cannot use let x = if a x then f x else x, because in this case the x would be the same on both sides, whereas
x <- return $ if a x then f x
                     else x
is the same as
(return $ if a x then (f x) else x) >>= \x -> ...
and the x in the if expression is clearly not the same as the resulting one, which is going to be used in the lambda on the right hand side.
Helpers
In order to make this more clear, you can add helpers like
condM :: Monad m => Bool -> a -> a -> m a
condM p a b = return $ if p then a else b
to get an even more concise version:
foo x y = runIdentity $ do
  x <- condM (a x) (f x) x
  x <- fmap f $ condM (c x) (g x) (h x)
  y <- condM (a y) (f y) y
  x <- condM (b y) (g x) x
  return (x, y)
Ternary craziness
And while we're at it, let's crank up the craziness and introduce a ternary operator:
(?) :: Bool -> (a, a) -> a
b ? ie = if b then fst ie else snd ie
(??) :: Monad m => Bool -> (a, a) -> m a
(??) p = return . (?) p
(#) :: a -> a -> (a, a)
(#) = (,)
infixr 2 ??
infixr 2 #
infixr 2 ?
foo x y = runIdentity $ do
  x <- a x ?? f x # x
  x <- fmap f $ c x ?? g x # h x
  y <- a y ?? f y # y
  x <- b y ?? g x # x
  return (x, y)
But the bottom line is that the Identity monad has everything you need for this task.
Imperative or non-imperative
One might argue whether this style is imperative. It's definitely a sequence of actions. But there's no state, unless you count the bound variables. However, then a pack of let … in … declarations also gives an implicit sequence: you expect the first let to bind first.
Using Identity is purely functional
Either way, the code above doesn't introduce mutability. x doesn't get modified; instead you have a new x or y shadowing the last one. This becomes clear if you desugar the do expression as noted above:
foo x y = runIdentity $
  a x ?? f x # x >>= \x ->
  c x ?? g x # h x >>= \x ->
  return (f x) >>= \x ->
  a y ?? f y # y >>= \y ->
  b y ?? g x # x >>= \x ->
  return (x, y)
Getting rid of the simplest monad
However, if we use (?) on the left-hand side and remove the returns, we can replace (>>=) :: m a -> (a -> m b) -> m b by something with type a -> (a -> b) -> b. This just happens to be flip ($). We end up with:
($>) :: a -> (a -> b) -> b
($>) = flip ($)

infixr 0 $> -- same infix as ($)

foo x y = a x ? f x # x $> \x ->
          c x ? g x # h x $> \x ->
          f x $> \x ->
          a y ? f y # y $> \y ->
          b y ? g x # x $> \x ->
          (x, y)
This is very similar to the desugared do expression above. Note that any usage of Identity can be transformed into this style, and vice-versa.
The problem you state looks like a nice application for arrows:
import Control.Arrow
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' p f g x = if p x then f x else g x
foo2 :: (Int, Int) -> (Int, Int)
foo2 = first (if' c g h . if' a f id) >>>
       first f >>>
       second (if' a f id) >>>
       (\(x, y) -> (if b y then g x else x, y))
In particular, first lifts a function a -> b to (a, c) -> (b, c), which is more idiomatic.
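As a quick illustration of first (and its counterpart second) acting on plain functions (this snippet is mine, not part of the answer):

import Control.Arrow (first, second)

main :: IO ()
main = do
  print (first (+ 1) (3 :: Int, "unchanged"))   -- (4,"unchanged")
  print (second not ("unchanged", True))        -- ("unchanged",False)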
Edit: if' can itself be written as a lift:
import Control.Applicative (liftA3)
-- a functional if for lifting
if'' b x y = if b then x else y
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' = liftA3 if''
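Putting the whole answer together as a runnable sketch (the sample predicates and functions below are borrowed from the ST-based answer further down; they are assumptions for testing, not part of this answer):

import Control.Arrow

a, b, c :: Int -> Bool
a = (< 10); b = even; c = odd

f, g, h :: Int -> Int
f = (+ 3); g = (* 2); h = subtract 1

if' :: (t -> Bool) -> (t -> t) -> (t -> t) -> t -> t
if' p onTrue onFalse x = if p x then onTrue x else onFalse x

foo2 :: (Int, Int) -> (Int, Int)
foo2 = first (if' c g h . if' a f id) >>>
       first f >>>
       second (if' a f id) >>>
       (\(x, y) -> (if b y then g x else x, y))

main :: IO ()
main = print (foo2 (0, 0))   -- (9,3), matching the [9,3] of the imperative foo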
I'd probably do something like this:
foo x y = ( x', y' )
  where x' = bg y' . f . cgh . af $ x
        y' = af y
        af z = (if a z then f else id) z
        cgh z = (if c z then g else h) z
        bg y x = (if b y then g else id) x
For something more complicated, you may want to consider using lens:
import Control.Lens
import Control.Monad (when)
import Control.Monad.State

whenM :: Monad m => m Bool -> m () -> m ()
whenM c a = c >>= \res -> when res a

ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb ml mr = mb >>= \b -> if b then ml else mr

foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
  whenM (uses _1 a) $
    _1 %= f
  ifM (uses _1 c)
    (_1 %= g)
    (_1 %= h)
  _1 %= f
  whenM (uses _2 a) $
    _2 %= f
  whenM (uses _2 b) $ do
    _1 %= g
And there's nothing stopping you from using more descriptive variable names:
foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
  let x :: Lens (a, c) (b, c) a b
      x = _1
      y :: Lens (c, a) (c, b) a b
      y = _2
  whenM (uses x a) $
    x %= f
  ifM (uses x c)
    (x %= g)
    (x %= h)
  x %= f
  whenM (uses y a) $
    y %= f
  whenM (uses y b) $ do
    x %= g
This is a job for the ST (state transformer) library.
ST provides:
Stateful computations in the form of the ST type. These look like ST s a for a computation that results in a value of type a, and may be run with runST to obtain a pure a value.
First-class mutable references in the form of the STRef type. The newSTRef a action creates a new STRef s a reference with an initial value of a, which can be read with readSTRef ref and written with writeSTRef ref a. A single ST computation can use any number of STRef references internally.
Together, these let you express the same mutable variable functionality as in your imperative example.
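Before wiring these into the translation below, here is a minimal, self-contained round-trip with a single STRef (my own sketch, not part of this answer's helpers):

import Control.Monad.ST
import Data.STRef

counterDemo :: Int
counterDemo = runST $ do
  ref <- newSTRef 0        -- create a fresh mutable reference inside ST
  writeSTRef ref 41        -- overwrite its contents
  modifySTRef ref (+ 1)    -- update it with a pure function
  readSTRef ref            -- read the final value; runST returns it purely
-- counterDemo == 42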
To use ST and STRef, we need to import:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Monad.ST.Safe
import Data.STRef
Instead of using the low-level readSTRef and writeSTRef all over the place, we can define the following helpers to match the imperative operations that the Python-style foo example uses:
-- STRef assignment.
(=:) :: STRef s a -> ST s a -> ST s ()
ref =: x = writeSTRef ref =<< x

-- STRef function application.
($:) :: (a -> b) -> STRef s a -> ST s b
f $: ref = f `fmap` readSTRef ref

-- Postfix guard syntax.
if_ :: Monad m => m () -> m Bool -> m ()
action `if_` guard = act' =<< guard
  where act' b = if b then action
                      else return ()
This lets us write:
ref =: x to assign the value of ST computation x to the STRef ref.
(f $: ref) to apply a pure function f to the STRef ref.
action `if_` guard to execute action only if guard results in True.
With these helpers in place, we can faithfully translate the original imperative definition of foo into Haskell:
a = (< 10)
b = even
c = odd

f x = x + 3
g x = x * 2
h x = x - 1

f3 x = x + 2

-- A stateful computation that takes two integer STRefs and results in a final [x,y].
fooST :: Integral n => STRef s n -> STRef s n -> ST s [n]
fooST x y = do
  x =: (f $: x) `if_` (a $: x)
  x' <- readSTRef x
  if c x'
    then x =: (g $: x)
    else x =: (h $: x)
  x =: (f $: x)
  y =: (f $: y) `if_` (a $: y)
  x =: (g $: x) `if_` (b $: y)
  sequence [readSTRef x, readSTRef y]

-- Pure wrapper: simply call fooST with two fresh references, and run it.
foo :: Integral n => n -> n -> [n]
foo x y = runST $ do
  x' <- newSTRef x
  y' <- newSTRef y
  fooST x' y'

-- This will print "[9,3]".
main = print (foo 0 0)
Points to note:
Although we first had to define some syntactical helpers (=:, $:, if_) before translating foo, this demonstrates how you can use ST and STRef as a foundation to grow your own little imperative language that's directly suited to the problem at hand.
Syntax aside, this matches the structure of the original imperative definition exactly, without any error-prone restructuring. Any minor changes to the original example can be mirrored directly to Haskell. (The addition of the temporary x' <- readSTRef x binding in the Haskell code is only in order to use it with the native if/else syntax: if desired, this can be replaced with an appropriate ST-based if/else construct.)
The above code demonstrates giving both pure and stateful interfaces to the same computation: pure callers can use foo without knowing that it uses mutable state internally, while ST callers can directly use fooST (and for example provide it with existing STRefs to modify).
@Sibi said it best in his comment:
I would suggest you stop thinking imperatively and rather think in a functional way. I agree that it will take some time to get used to the new pattern, but trying to translate imperative ideas to functional languages isn't a great approach.
Practically speaking, your chain of let can be a good starting point:
foo x0 y0 =
  let x1 = if a x0 then f x0 else x0 in
  let x2 = if c x1 then g x1 else h x1 in
  let x3 = f x2 in
  let y1 = if a y0 then f y0 else y0 in
  let x4 = if b y1 then g x3 else x3 in
  [x4, y1]
But I would suggest using a single let and giving descriptive names to the intermediate stages.
In this example unfortunately I don't have a clue what the various x's and y's do, so I cannot suggest meaningful names. In real code you would use names such as x_normalized, x_translated, or such, instead of x1 and x2, to describe what those values really are.
In fact, in a let or where you don't really have variables: they're just shorthand names you give to intermediate results, to make it easy to compose the final expression (the one after in or before the where.)
This is the spirit behind the x_bar and x_baz below. Try to come up with names that are reasonably descriptive, given the context of your code.
foo x y =
  let x_bar   = if a x then f x else x
      x_baz   = f (if c x_bar then g x_bar else h x_bar)
      y_bar   = if a y then f y else y
      x_there = if b y_bar then g x_baz else x_baz
  in [x_there, y_bar]
Then you can start recognizing patterns that were hidden in the imperative code. For example, x_bar and y_bar are basically the same transformation, applied respectively to x and y: that's why they have the same suffix "_bar" in this nonsensical example; then your x2 probably doesn't need an intermediate name, since you can just apply f to the result of the entire "if c then g else h".
Going on with the pattern recognition, you should factor out the transformations that you are applying to variables into sub-lambdas (or whatever you call the auxiliary functions defined in a where clause.)
Again, I don't have a clue what the original code did, so I cannot suggest meaningful names for the auxiliary functions. In a real application, f_if_a would be called normalize_if_needed or thaw_if_frozen or mow_if_overgrown... you get the idea:
foo x y =
  let x_bar   = f_if_a x
      y_bar   = f_if_a y
      x_baz   = f (g_if_c_else_h x_bar)
      x_there = g_if_b x_baz y_bar
  in [x_there, y_bar]
  where
    f_if_a x
      | a x       = f x
      | otherwise = x
    g_if_c_else_h x
      | c x       = g x
      | otherwise = h x
    g_if_b x y
      | b y       = g x
      | otherwise = x
Don't disregard this naming business.
The whole point of Haskell and other pure functional languages is to express algorithms without the assignment operator, meaning the tool that can modify the value of an existing variable.
The names you give to things inside a function definition, whether introduced as arguments, let, or where, can only refer to one value (or auxiliary function) throughout the entire definition, so that your code can be more easily reasoned about and proven correct.
If you don't give them meaningful names (and conversely giving your code a meaningful structure) then you're missing out on the entire purpose of Haskell.
(IMHO the other answers so far, citing monads and other shenanigans, are barking up the wrong tree.)
I always prefer layering state transformers to using a single state over a tuple: it definitely declutters things by letting you "focus" on a specific layer (representations of the x and y variables in our case):
import Control.Monad (when)
import Control.Monad.Trans.Class
import Control.Monad.Trans.State

foo :: x -> y -> (x, y)
foo x y =
  (flip runState) y $ (flip execStateT) x $ do
    get >>= \v -> when (a v) (put (f v))
    get >>= \v -> put ((if c v then g else h) v)
    modify f
    lift $ get >>= \v -> when (a v) (put (f v))
    lift get >>= \v -> when (b v) (modify g)
The lift function allows us to focus on the inner state layer, which is y.
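A quick end-to-end check of this layered approach (a sketch of mine, not part of the answer: the sample predicates and functions are the ones used in the ST answer above, and the signature is made monomorphic so it compiles against them):

import Control.Monad (when)
import Control.Monad.Trans.Class
import Control.Monad.Trans.State

-- Sample definitions borrowed from the ST answer above.
a, b, c :: Int -> Bool
a = (< 10); b = even; c = odd

f, g, h :: Int -> Int
f = (+ 3); g = (* 2); h = subtract 1

fooInt :: Int -> Int -> (Int, Int)
fooInt x y =
  flip runState y $ flip execStateT x $ do
    get >>= \v -> when (a v) (put (f v))
    get >>= \v -> put ((if c v then g else h) v)
    modify f
    lift $ get >>= \v -> when (a v) (put (f v))
    lift get >>= \v -> when (b v) (modify g)

main :: IO ()
main = print (fooInt 0 0)   -- (9,3), matching the [9,3] of the question's foo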

Writing Category Instance for custom Lens

I have been reading this article to understand Lenses. I know this is different from Edward Kmett's lens package, but nonetheless it's useful for the fundamentals.
So, A Lens is defined like this:
type Lens a b = (a -> b, b -> a -> a)
It has been mentioned that Lenses form a category, and I have been trying to create an instance of the Category typeclass. For a start, I wrote down the types of the functions:
(.) :: Lens y z -> Lens x y -> Lens x z
id :: Lens x x
And after this, I just stared at it all day. What exactly is the thought process for writing its definition?
I found this article (Lenses from Scratch on fpcomplete by Joseph Abrahamson) to be very good: it starts from the same representation of lenses you started with, defines composition for it, and continues along the path to a representation more similar to lens.
EDIT: I find type holes to be excellent when doing this kind of thing:
(<.>):: Lens y z -> Lens x y -> Lens x z
(getA,setA) <.> (getB,setB) = (_,_)
So now we have 2 holes, the first in the tuple says (output cleaned):
Found hole ‘_’ with type: x -> z
...
Relevant bindings include
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
(<.>) :: Lens y z -> Lens x y -> Lens x z
Looking hard at the bindings, we already have what we need! getB :: x -> y and getA :: y -> z together with function composition (.) :: (b -> c) -> (a -> b) -> a -> c
So we happily insert this:
(<.>):: Lens y z -> Lens x y -> Lens x z
(getA,setA) <.> (getB,setB) = (getA . getB, _)
And continue with the second type hole, which says:
Found hole ‘_’ with type: z -> x -> x
Relevant bindings include
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
The most similar thing we have is setA :: z -> y -> y, we start by inserting a lambda, capturing the arguments:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> _)
changing your type hole to:
Found hole ‘_’ with type: x
Relevant bindings include
x :: x
z :: z
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
we could insert x which type checks, but does not give us what we want (nothing happens when setting). The only other binding that could give us an x is setB, so we insert that:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB _ _)
Our first type hole says:
Found hole ‘_’ with type: y
Relevant bindings include
x :: x
z :: z
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
So we need a y. Looking at what is in scope, getB can give us a y if we give it an x, which we happen to have, but this would lead us to a useless lens that does nothing again. The alternative is to use setA:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA _ _) _)
(Speeding things up a little from here on)
Again the first hole wants something of type z, which we happen to have as an argument to our lambda:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA z _) _)
To fill the first type hole of type y we can use getB :: x -> y giving it the argument of our lambda:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA z (getB x)) _)
Which leaves us with one remaining type hole, which can trivially be replaced by x, leading to the final definition:
(<.>):: Lens y z -> Lens x y -> Lens x z
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA z (getB x)) x)
You can try to define id for yourself, using type holes and hoogle if necessary
Try this:
(.) :: Lens y z -> Lens x y -> Lens x z
(getZfromY , setZinY) . (getYfromX , setYinX) = (getZfromX , setZinX)
  where getZfromX someX = ...
        setZinX someZ someX = ...
The idea is: combine the two getters to make the new getter, and combine the two setters to make a new setter.
For the identity, think about:
id :: Lens x x
id = (getXfromX , setXinX)
  where getXfromX someX = ...
        setXinX newX oldX = ...
It seems to be a fairly straightforward process. But you also need to check that you get a category - this requires equational reasoning - because, for example, there is at least one more way to implement the setter of id with type x -> x -> x, and only one of them will make a category.
So, let's start with getting functions of the right type.
Lens y z -> Lens x y -> Lens x z ==
(y->z, z->y->y) -> (x->y, y->x->x) -> (x->z, z->x->x)
It seems clear how to get x->z from x->y and y->z - compose. Well, and you have ways to construct new x from old x and new y, and a way to get old y from old x, so if you can construct new y from z and old y, you are done.
(.) (yz, zyy) (xy, yxx) = (yz . xy, \z x -> yxx (zyy z (xy x)) x)
Similarly for id:
Lens x x ==
(x->x, x->x->x)
So
id = (id, const)
So far so good, the types check. Now let's check that we've got a category. There is one law:
f . id = f = id . f
Checking one way (a bit informal, so need to bear in mind that . and id refer to different things in f . id and fg . id):
f . id = (fg, fs) . (id, const) =
(fg . id, \z x -> const (fs z (id x)) x) =
(fg, \z x -> fs z (id x)) = (fg, fs)
Checking the other way:
id . f = (id, const) . (fg, fs) =
(id . fg, \z x -> fs (const z (fg x)) x) =
(fg, \z x -> fs z x) = (fg, fs)
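For a concrete feel, here is a small self-contained check of the composition (the pair lens and the names below are mine, not from the answers):

type Lens a b = (a -> b, b -> a -> a)

(<.>) :: Lens y z -> Lens x y -> Lens x z
(getA, setA) <.> (getB, setB) =
  (getA . getB, \z x -> setB (setA z (getB x)) x)

-- A lens onto the first component of a pair.
first_ :: Lens (a, b) a
first_ = (fst, \a (_, b) -> (a, b))

-- Composing it with itself focuses the first component of a nested pair.
inner :: Lens ((a, b), c) a
inner = first_ <.> first_

main :: IO ()
main = do
  let (getInner, setInner) = inner
  print (getInner ((1 :: Int, 'x'), True))     -- 1
  print (setInner 9 ((1 :: Int, 'x'), True))   -- ((9,'x'),True)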

Y Combinator in Haskell

Is it possible to write the Y Combinator in Haskell?
It seems like it would have an infinitely recursive type.
Y :: f -> b -> c
where f :: (f -> b -> c)
or something. Even a simple slightly factored factorial
factMaker _ 0 = 1
factMaker fn n = n * ((fn fn) (n - 1))
{- to be called as
(factMaker factMaker) 5
-}
fails with "Occurs check: cannot construct the infinite type: t = t -> t2 -> t1"
(The Y combinator looks like this
(define Y
  (lambda (X)
    ((lambda (procedure)
       (X (lambda (arg) ((procedure procedure) arg))))
     (lambda (procedure)
       (X (lambda (arg) ((procedure procedure) arg)))))))
in scheme)
Or, more succinctly as
(λ (f) ((λ (x) (f (λ (a) ((x x) a))))
        (λ (x) (f (λ (a) ((x x) a))))))
For the applicative order
And
(λ (f) ((λ (x) (f (x x)))
        (λ (x) (f (x x)))))
Which is just an eta contraction away, for the lazy version.
If you prefer short variable names.
Here's a non-recursive definition of the y-combinator in haskell:
newtype Mu a = Mu (Mu a -> a)
y f = (\h -> h $ Mu h) (\x -> f . (\(Mu g) -> g) x $ x)
hat tip
The Y combinator can't be typed using Hindley-Milner types, the polymorphic lambda calculus on which Haskell's type system is based. You can prove this by appeal to the rules of the type system.
I don't know if it's possible to type the Y combinator by giving it a higher-rank type. It would surprise me, but I don't have a proof that it's not possible. (The key would be to identify a suitably polymorphic type for the lambda-bound x.)
If you want a fixed-point operator in Haskell, you can define one very easily because in Haskell, let-binding has fixed-point semantics:
fix :: (a -> a) -> a
fix f = f (fix f)
You can use this in the usual way to define functions and even some finite or infinite data structures.
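For example (this snippet is my own illustration, not from the answer), fix can define an ordinary recursive function or an infinite structure without any explicit recursion at the use site:

fix :: (a -> a) -> a
fix f = f (fix f)

-- factorial, written without naming itself in its own body
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

-- an infinite data structure
ones :: [Int]
ones = fix (1 :)

main :: IO ()
main = do
  print (factorial 5)   -- 120
  print (take 3 ones)   -- [1,1,1]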
It is also possible to use functions on recursive types to implement fixed points.
If you're interested in programming with fixed points, you want to read Bruce McAdam's technical report That About Wraps it Up.
The canonical definition of the Y combinator is as follows:
y = \f -> (\x -> f (x x)) (\x -> f (x x))
But it doesn't type check in Haskell because of the x x, since it would require an infinite type:
x :: a -> b -- x is a function
x :: a -- x is applied to x
--------------------------------
a = a -> b -- infinite type
If the type system were to allow such recursive types, it would make type checking undecidable (prone to infinite loops).
But the Y combinator will work if you force it to typecheck, e.g. by using unsafeCoerce :: a -> b:
import Unsafe.Coerce
y :: (a -> a) -> a
y = \f -> (\x -> f (unsafeCoerce x x)) (\x -> f (unsafeCoerce x x))
main = putStrLn $ y ("circular reasoning works because " ++)
This is unsafe (obviously). rampion's answer demonstrates a safer way to write a fixpoint combinator in Haskell without using recursion.
Oh, this wiki page and this Stack Overflow answer seem to answer my question. I will write up more of an explanation later.
Now, I've found something interesting about that Mu type. Consider S = Mu Bool.
data S = S (S -> Bool)
If one treats S as a set and that equals sign as isomorphism, then the equation becomes
S ⇋ S -> Bool ⇋ Powerset(S)
So S is the set of sets that are isomorphic to their powerset!
But we know from Cantor's diagonal argument that the cardinality of Powerset(S) is always strictly greater than the cardinality of S, so they are never isomorphic.
I think this is why you can now define a fixed-point operator, even though you can't define one without it.
Just to make rampion's code more readable:
-- Mu :: (Mu a -> a) -> Mu a
newtype Mu a = Mu (Mu a -> a)
w :: (Mu a -> a) -> a
w h = h (Mu h)
y :: (a -> a) -> a
y f = w (\(Mu x) -> f (w x))
-- y f = f (y f)
in which w stands for the omega combinator w = \x -> x x, and y stands for the y combinator y = \f -> w (f . w).
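And a quick usage sketch of my own (not from the answer), showing that this non-recursive y behaves like fix when applied to an open-recursive factorial:

-- Using the Mu, w and y definitions above:
factorial :: Integer -> Integer
factorial = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))

-- factorial 5 == 120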
