How to rewrite the following expression in point-free style?
p x y = x*x + y
Using the lambda-calculus I did the following:
p = \x -> \y -> (+) ((*) x x) y
= \x -> (+) ((*) x x) -- here starts my problem
= \x -> ((+) . ((*) x )) x
... ?
I asked lambdabot
<Iceland_jack> @pl p x y = x*x + y
<lambdabot> p = (+) . join (*)
join is from Control.Monad and normally has this type
join :: Monad m => m (m a) -> m a
but using instance Monad ((->) x) (if we could left section types this could be written (x ->)) we get the following type / definition
join :: (x -> x -> a) -> (x -> a)
join f x = f x x
Let's ask GHCi to confirm the type:
>> import Control.Monad
>> :set -XTypeApplications
>> :t join @((->) _)
join @((->) _) :: (x -> x -> a) -> x -> a
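As a quick value-level sanity check (a throwaway sketch; p' is just my name for the point-free version):

import Control.Monad (join)

p, p' :: Int -> Int -> Int
p x y = x*x + y        -- the original definition
p' = (+) . join (*)    -- lambdabot's answer; join (*) x = x * x

-- and [ p x y == p' x y | x <- [-3..3], y <- [-3..3] ]   -- True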
Since you mentioned Lambda Calculus I will suggest how to solve this with SK combinators. η-reduction was a good try, but as you can tell you can't η-reduce when the variable is used twice.
S = λfgx.fx(gx)
K = λxy.x
The feature of duplication is encoded by S. You simplified your problem to:
λx.(+)((*)xx)
So let us start there. Any lambda term can be algorithmically transformed to a SK term.
T[λx.(+)((*)xx)]
= S(T[λx.(+)])(T[λx.(*)xx]) -- rule 6
= S(K(T[(+)]))(T[λx.(*)xx]) -- rule 3
= S(K(+))(T[λx.(*)xx]) -- rule 1
= S(K(+))(S(T[λx.(*)x])(T[λx.x])) -- rule 6
= S(K(+))(S(*)(T[λx.x])) -- η-reduce
= S(K(+))(S(*)I) -- rule 4
In Haskell, S = (<*>) and K = pure and I = id. Therefore:
= (<*>)(pure(+))((<*>)(*)id)
And rewriting:
= pure (+) <*> ((*) <*> id)
Then we can apply other definitions we know:
= fmap (+) ((*) <*> id) -- pure f <*> x = fmap f x
= fmap (+) (join (*)) -- (<*> id) = join for Monad ((->)a)
= (+) . join (*) -- fmap = (.) for Functor ((->)a)
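To tie the two derivations together, here is a small compilable sketch (the lowercase s, k, i and the names pSK, pPF are mine) that defines the combinators directly and notes that the SK term computes the same function:

import Control.Monad (join)

-- the combinators, specialised to functions
s :: (x -> a -> b) -> (x -> a) -> x -> b
s f g x = f x (g x)        -- S, i.e. (<*>) at ((->) x)

k :: a -> b -> a
k = const                  -- K, i.e. pure at ((->) x)

i :: a -> a
i = id                     -- I

pSK, pPF :: Int -> Int -> Int
pSK = s (k (+)) (s (*) i)  -- the SK term derived above
pPF = (+) . join (*)       -- the readable point-free form

-- pSK 3 4 == 13 == pPF 3 4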
If you go to http://pointfree.io/ and enter
p x y = x*x + y
it gives you
p = (+) . join (*)
Just for fun, you can use the State monad to write
p = (+) . uncurry (*) . runState get
runState get simply produces a pair (x, x) from an initial x; get copies the state to the result, and runState returns both the state and that result.
uncurry (*) takes a pair of values rather than 2 separate values ((uncurry (*)) (3, 3) == (*) 3 3 == 9).
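Put together as a compilable sketch (this assumes the Control.Monad.State module from mtl/transformers):

import Control.Monad.State (get, runState)

p :: Int -> Int -> Int
p = (+) . uncurry (*) . runState get

-- runState get 3     == (3, 3)
-- uncurry (*) (3, 3) == 9
-- p 3 4              == 13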
Related
I am currently reading Learn You a Haskell for Great Good! and am stumbling on the explanation for the evaluation of a certain code block. I've read the explanations several times and am starting to doubt if even the author understands what this piece of code is doing.
ghci> (+) <$> (+3) <*> (*100) $ 5
508
An applicative functor applies a function in some context to a value in some context to get some result in some context. I have spent a few hours studying this code block and have come up with a few explanations for how this expression is evaluated, and none of them are satisfactory. I understand that (5+3)+(5*100) is 508, but the problem is getting to this expression. Does anyone have a clear explanation for this piece of code?
The other two answers have given the detail of how this is calculated - but I thought I might chime in with a more "intuitive" answer to explain how, without going through a detailed calculation, one can "see" that the result must be 508.
As you implied, every Applicative (in fact, even every Functor) can be viewed as a particular kind of "context" which holds values of a given type. As simple examples:
Maybe a is a context in which a value of type a might exist, but might not (usually the result of a computation which may fail for some reason)
[a] is a context which can hold zero or more values of type a, with no upper limit on the number - representing all possible outcomes of a particular computation
IO a is a context in which a value of type a is available as a result of interacting with "the outside world" in some way. (OK that one isn't so simple...)
And, relevant to this example:
r -> a is a context in which a value of type a is available, but its particular value is not yet known, because it depends on some (as yet unknown) value of type r.
The Applicative methods can be very well understood on the basis of values in such contexts. pure embeds an "ordinary value" in a "default context", in which it behaves as much like a "context-free" value as possible. I won't go through this for each of the 4 examples above (most of them are very obvious), but I will note that for functions, pure = const - that is, a "pure value" a is represented by the function which always produces a no matter what the source value.
Rather than dwell on how <*> can best be described using the "context" metaphor though, I want to dwell on the particular expression:
f <$> a <*> b
where f is a function between two "pure values" and a and b are "values in a context". This expression in fact has a synonym as a function: liftA2. Although using the liftA2 function is generally considered less idiomatic than the "applicative style" using <$> and <*>, the name emphasises that the idea is to "lift" a function on "ordinary values" to one on "values in a context". And when thought of like this, I think it is usually very intuitive what this does, given a particular "context" (i.e. a particular Applicative instance).
So the expression:
(+) <$> a <*> b
for values a and b of type say f Int for an Applicative f, behaves as follows for different instances f:
if f = Maybe, then the result, if a and b are both Just values, is to add up the underlying values and wrap them in a Just. If either a or b is Nothing, then the whole expression is Nothing.
if f = [] (the list instance) then the above expression is a list containing all sums of the form a' + b' where a' is in a and b' is in b.
if f = IO, then the above expression is an IO action that performs all the I/O effects of a followed by those of b, and results in the sum of the Ints produced by those two actions.
So what, finally, does it do if f is the function instance? Since a and b are both functions describing how to get a given Int given an arbitrary (Int) input, it is natural that lifting the (+) function over them should be the function that, given an input, gets the result of both the a and b functions, and then adds the results.
And that is, of course, what it does - and the explicit route by which it does that has been very ably mapped out by the other answers. But the reason why it works out like that - indeed, the very reason we have the instance that f <*> g = \x -> f x (g x), which might otherwise seem rather arbitrary (although in actual fact it's one of the very few things, if not the only thing, that will type-check), is so that the instance matches the semantics of "values which depend on some as-yet-unknown other value, according to the given function". And in general, I would say it's often better to think "at a high level" like this than to be forced to go down to the low-level details of exactly how computations are performed. (Although I certainly don't want to downplay the importance of also being able to do the latter.)
[Actually, from a philosophical point of view, it might be more accurate to say that the definition is as it is just because it's the "natural" definition that type-checks, and that it's just happy coincidence that the instance then takes on such a nice "meaning". Mathematics is of course full of just such happy "coincidences" which turn out to have very deep reasons behind them.]
It is using the applicative instance for functions. Your code
(+) <$> (+3) <*> (*100) $ 5
is evaluated as
( (\a->\b->a+b) <$> (\c->c+3) <*> (\d->d*100) ) 5 -- f <$> g
( (\x -> (\a->\b->a+b) ((\c->c+3) x)) <*> (\d->d*100) ) 5 -- \x -> f (g x)
( (\x -> (\a->\b->a+b) (x+3)) <*> (\d->d*100) ) 5
( (\x -> \b -> (x+3)+b) <*> (\d->d*100) ) 5 -- f <*> g
(\y -> ((\x->\b->(x+3)+b) y) ((\d->d*100) y)) 5 -- \y -> (f y) (g y)
(\y -> (\b->(y+3)+b) (y*100)) 5
(\y -> (y+3)+(y*100)) 5
(5+3)+(5*100)
where <$> is fmap or just function composition ., and <*> is ap if you know how it behaves on monads.
Let us first take a look at how fmap and (<*>) are defined for functions:
instance Functor ((->) r) where
    fmap = (.)

instance Applicative ((->) a) where
    pure = const
    (<*>) f g x = f x (g x)
    liftA2 q f g x = q (f x) (g x)
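If you'd like to check this without relying on the built-in instance, you can hand-roll the two definitions under fresh names (fmap' and ap' are mine) and evaluate the expression with them:

fmap' :: (a -> b) -> (r -> a) -> (r -> b)
fmap' = (.)

ap' :: (r -> a -> b) -> (r -> a) -> (r -> b)
ap' f g x = f x (g x)

result :: Int
result = ((+) `fmap'` (+3) `ap'` (*100)) 5   -- 508, matching (+) <$> (+3) <*> (*100) $ 5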
The expression we aim to evaluate is:
(+) <$> (+3) <*> (*100) $ 5
or more verbose:
((+) <$> (+3)) <*> (*100) $ 5
If we evaluate (<$>), which is an infix synonym for fmap, we see that this is equal to:
(+) . (+3)
so that means our expression is equivalent to:
((+) . (+3)) <*> (*100) $ 5
Next we can apply the sequential application (<*>). Here f is (+) . (+3) and g is (*100), which means we construct a function that looks like:
\x -> ((+) . (+3)) x ((*100) x)
We can now simplify this and rewrite this into:
\x -> ((+) (x+3)) ((*100) x)
and then rewrite it to:
\x -> (+) (x+3) ((*100) x)
We thus have constructed a function that looks like:
\x -> (x+3) + 100 * x
or simpler:
\x -> 101 * x + 3
If we then calculate:
(\x -> 101*x + 3) 5
then we of course obtain:
101 * 5 + 3
and thus:
505 + 3
which is the expected:
508
For any applicative,
a <$> b <*> c = liftA2 a b c
For functions,
liftA2 a b c x
= a (b x) (c x) -- by definition;
= (a . b) x (c x)
= ((a <$> b) <*> c) x
Thus
(+) <$> (+3) <*> (*100) $ 5
=
liftA2 (+) (+3) (*100) 5
=
(+) ((+3) 5) ((*100) 5)
=
(5+3) + (5*100)
(the long version of this answer follows.)
Pure math has no time. Pure Haskell has no time. Speaking in verbs ("applicative functor applies" etc.) can be confusing ("applies... when?...").
Instead, (<*>) is a combinator which combines a "computation" (denoted by an applicative functor) carrying a function (in the context of that type of computations) and a "computation" of the same type, carrying a value (in like context), into one combined "computation" that carries out the application of that function to that value (in such context).
"Computation" is used to contrast it with a pure Haskell "calculations" (after Philip Wadler's "Calculating is better than Scheming" paper, itself referring to David Turner's Kent Recursive Calculator language, one of predecessors of Miranda, the (main) predecessor of Haskell).
"Computations" might or might not be pure themselves, that's an orthogonal issue. But mainly what it means, is that "computations" embody a generalized function call protocol. They might "do" something in addition to / as part of / carrying out the application of a function to its argument. Or in types,
( $ ) :: (a -> b) -> a -> b
(<$>) :: (a -> b) -> f a -> f b
(<*>) :: f (a -> b) -> f a -> f b
(=<<) :: (a -> f b) -> f a -> f b
With functions, the context is application (another one), and to recover the value -- be it a function or an argument -- the application to a common argument is to be performed.
(bear with me, we're almost there).
The pattern a <$> b <*> c is also expressible as liftA2 a b c. And so, the "functions" applicative functor "computation" type is defined by
liftA2 h x y s = let x' = x s            -- embellished application of h to x and y
                     y' = y s in         -- in context of functions, or Reader
                 h x' y'
-- liftA2 h x y = let x' = x             -- non-embellished application, or Identity
--                    y' = y in
--                h x' y'
-- liftA2 h x y s = let (x',s')  = x s   -- embellished application of h to x and y
--                      (y',s'') = y s' in   -- in context of
--                  (h x' y', s'')       -- state-passing computations, or State
-- liftA2 h x y = let (x',w)  = x        -- embellished application of h to x and y
--                    (y',w') = y in     -- in context of
--                (h x' y', w++w')       -- logging computations, or Writer
-- liftA2 h x y = [h x' y' |             -- embellished application of h to x and y
--                 x' <- x,              -- in context of
--                 y' <- y ]             -- nondeterministic computations, or List
-- ( and for Monads we define `liftBind h x k =` and replace `y` with `k x'`
-- in the bodies of the above combinators; then liftA2 becomes liftBind: )
-- liftA2 :: (a -> b -> c) -> f a -> f b -> f c
-- liftBind :: (a -> b -> c) -> f a -> (a -> f b) -> f c
-- (>>=) = liftBind (\a b -> b) :: f a -> (a -> f b) -> f b
And in fact all the above snippets can be just written with ApplicativeDo as liftA2 h x y = do { x' <- x ; y' <- y ; pure (h x' y') } or even more intuitively as
liftA2 h x y = [h x' y' | x' <- x, y' <- y], with Monad Comprehensions, since all the above computation types are monads as well as applicative functors. This shows by the way that (<*>) = liftA2 ($), which one might find illuminating as well.
Indeed,
> :t let liftA2 h x y r = h (x r) (y r) in liftA2
:: (a -> b -> c) -> (t -> a) -> (t -> b) -> (t -> c)
> :t liftA2 -- the built-in one
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
i.e. the types match when we take f a ~ (t -> a) ~ (->) t a, i.e. f ~ (->) t.
And so, we're already there:
(+) <$> (+3) <*> (*100) $ 5
=
liftA2 (+) (+3) (*100) 5
=
(+) ((+3) 5) ((*100) 5)
=
(+) (5+3) (5*100)
=
(5+3) + (5*100)
It's just how liftA2 is defined for this type, Applicative ((->) t) => ...:
instance Applicative ((->) t) where
    pure x t = x
    liftA2 h x y t = h (x t) (y t)
There's no need to define (<*>). The source code says:
Minimal complete definition
pure, ((<*>) | liftA2)
So now you've been wanting to ask for a long time, why is it that a <$> b <*> c is equivalent to liftA2 a b c?
The short answer is, it just is. One can be defined in terms of the other -- i.e. (<*>) can be defined via liftA2,
g <*> x = liftA2 id g x -- i.e. (<*>) = liftA2 id = liftA2 ($)
-- (g <*> x) t = liftA2 id g x t
-- = id (g t) (x t)
-- = (id . g) t (x t) -- = (id <$> g <*> x) t
-- = g t (x t)
(which is exactly as it is defined in the source),
and it is a law that every Applicative Functor must follow, that h <$> g = pure h <*> g.
Lastly,
liftA2 h g x == pure h <*> g <*> x
-- h g x == (h g) x
because <*> associates to the left: it is infixl 4 <*>.
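To see those equalities concretely, here is a small sketch (liftA2' and checkAt are my names): liftA2 written with ApplicativeDo as mentioned above, plus a pointwise comparison of (<*>), liftA2 id and liftA2 ($) at the function instance:

{-# LANGUAGE ApplicativeDo #-}
import Control.Applicative (liftA2)

-- liftA2 via ApplicativeDo; the desugaring needs only Applicative
liftA2' :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2' h x y = do
  x' <- x
  y' <- y
  pure (h x' y')

-- at the function instance, the first three agree pointwise
checkAt :: Int -> [Int]
checkAt t =
  [ ((+) <*> (*2)) t           -- (<*>) directly
  , liftA2 id  (+) (*2) t      -- (<*>) = liftA2 id
  , liftA2 ($) (+) (*2) t      -- ...   = liftA2 ($)
  , liftA2' (+) (+3) (*100) t  -- the running example, via ApplicativeDo
  ]
-- e.g. checkAt 5 == [15, 15, 15, 508]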
I have two simple examples:
1) xt function (what is this?)
Prelude> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
Prelude> :{
Prelude| f::Int->Int
Prelude| f x = x
Prelude| :}
Prelude> xt = fmap f -- ?
Prelude> :t xt
xt :: Functor f => f Int -> f Int
Prelude> xt (+2) 1
3
2) xq function (via composition)
Prelude> :{
Prelude| return x = [x]
Prelude| :}
Prelude> xq = return . f
Prelude> :t xq
xq :: Int -> [Int]
Prelude> :t return
return :: a -> [a]
I get the xq function through composition, return (f x). But what does fmap f mean, and what is the difference?
The Functor instance for (->) r defines fmap to be function composition:
fmap f g = f . g
Thus, xt (+2) == fmap f (+2) == f . (+2) == (+2) (since f is the identity function for Int). Applied to 1, you get the observed answer 3.
fmap is the function defined by the Functor type class:
class Functor f where
    fmap :: (a -> b) -> f a -> f b
It takes a function as its argument and returns a new function "lifted" into the functor in question. The exact definition is supplied by the Functor instance. Above is the definition for the function functor; here for reference are some simpler ones for lists and Maybe:
instance Functor [] where
    fmap = map

instance Functor Maybe where
    fmap f Nothing = Nothing
    fmap f (Just x) = Just (f x)
> fmap (+1) [1,2,3]
[2,3,4]
> fmap (+1) Nothing
Nothing
> fmap (+1) (Just 3)
Just 4
Since you can think of functors as boxes containing one or more values, the intuition for the function functor is that a function is a box containing the result of applying the function to its argument. That is, (+2) is a box that contains some value plus 2. (F)mapping a function on that box provides a box that contains the result of applying f to the result of the original function, i.e., produces a function that is the composition of f with the original function.
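To make the "box" picture concrete, a tiny sketch (the names box and lifted are mine):

box :: Int -> Int
box = (+2)                 -- "a box containing its input plus 2"

lifted :: Int -> Int
lifted = fmap (*10) box    -- for functions, fmap is (.), so this is (*10) . (+2)

-- lifted 5 == (5 + 2) * 10 == 70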
Both xq = return . f and xt = fmap f can be eta-expanded:
xq x = (return . f) x = return (f x) = return x
Now it can be eta-contracted:
xq = return
The second is
xt y = fmap f y = fmap (\x -> x) y = fmap id y = id y = y
fmap has type Functor f => (a -> b) -> f a -> f b, so fmap f has type Functor f => f Int -> f Int, because f :: Int -> Int. From its type we see that fmap f is a function, expecting an f Int and producing an f Int.
Since f x = x for Ints by definition, it means that f = id for Ints, where id is a predefined function defined just the same way as f is (but in general, for any type).
Then by Functor laws (and that's all we need to know about "Functors" here), fmap id = id and so xt y = y, in other words it's also id - but only for Ints,
xt = id :: Int -> Int
Naturally, xt (+2) = id (+2) = (+2).
Addendum: for something to be a "Functor" means that it can be substituted for f in
fmap id (x :: f a) = x
(fmap g . fmap h) = fmap (g . h)
so that the expressions involved make sense (i.e. are well formed, i.e. have a type), and the above equations hold (they are in fact the two "Functor laws").
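Functions can't be compared for equality directly, but you can spot-check both laws pointwise for the function functor; a small sketch (law1, law2 and the sample functions are arbitrary choices of mine):

law1, law2 :: Int -> Bool
law1 x = fmap id (+2) x == id (+2) x                 -- fmap id == id
law2 x = (fmap (*3) . fmap (+2)) (subtract 1) x
         == fmap ((*3) . (+2)) (subtract 1) x        -- fmap g . fmap h == fmap (g . h)

-- all law1 [0..10] && all law2 [0..10]   -- True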
Consider the following function:
foo =
  [1,2,3] >>=
  return . (*2) . (+1)
For better readability and logic, I would like to move my pure functions (*2) and (+1) to the left of the return. I could achieve this like this:
infixr 9 <.
(<.) :: (a -> b) -> (b -> c) -> (a -> c)
(<.) f g = g . f
bar =
  [1,2,3] >>=
  (+1) <.
  (*2) <.
  return
However, I don't like the right-associativity of (<.).
Let's introduce a function leftLift:
leftLift :: Monad m => (a -> b) -> a -> m b
leftLift f = return . f
baz =
  [1,2,3] >>=
  leftLift (+1) >>=
  leftLift (*2) >>=
  return
I quite like this. Another possibility would be to define a variant of bind:
infixl 1 >>$
(>>$) :: Monad m => m a -> (a -> b) -> m b
(>>$) m f = m >>= return . f
qux =
  [1,2,3] >>$
  (+1) >>$
  (*2) >>=
  return
I am not sure whether that is a good idea, since it would not allow me to use do notation should I want that. leftLift I can use with do:
bazDo = do
  x <- [1,2,3]
  y <- leftLift (+1) x
  z <- leftLift (*2) y
  return z
I didn't find a function on Hoogle with the signature of leftLift. Does such a function exist, and, if, what is it called? If not, what should I call it? And what would be the most idiomatic way of doing what I am trying to do?
Edit: Here's a version inspired by @dunlop's answer below:
infixl 4 <&>
(<&>) :: Functor f => f a -> (a -> b) -> f b
(<&>) = flip fmap
blah =
  [1,2,3] <&>
  (+1) <&>
  (*2) >>=
  return
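As a side note, modern versions of base already export this operator from Data.Functor, so you may not need to define it yourself (a minimal sketch; blah' is my name):

import Data.Functor ((<&>))

blah' :: [Int]
blah' = [1,2,3] <&> (+1) <&> (*2)   -- [4,6,8]; the trailing >>= return in blah is redundant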
I should also add that I was after a bind-variant, because I wanted to write my code in point-free style. For do-notation, I guess I don't need to "pretend" that I'm doing anything monadic, so I can use lets.
Every Monad is a Functor (and an Applicative too). Your (>>$) is (flipped) fmap.
GHCi> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
GHCi> :t (<$>) -- Infix synonym for 'fmap'
(<$>) -- Infix synonym for 'fmap'
:: Functor f => (a -> b) -> f a -> f b
GHCi> fmap ((*2) . (+1)) [1,2,3]
[4,6,8]
GHCi> (*2) . (+1) <$> ([1,2,3] >>= \x -> [1..x])
[4,4,6,4,6,8]
(By the way, a common name for flipped fmap is (<&>). That is, for instance, what lens calls it.)
If you are using do-notation, there is little reason to use any variant of fmap explicitly for this kind of transformation. Just switch your <- monadic bindings for let-bindings:
bazDo = do
  x <- [1,2,3]
  let y = (+1) x
      z = (*2) y
  return z

or, skipping the intermediate binding:

bazDo = do
  x <- [1,2,3]
  let y = (+1) x
  return ((*2) y)
For better readability...
That's going to be subjective as people disagree on what constitutes readable.
That being said, I agree that sometimes it's easier to understand data transformations when they are written left to right. I think your >>$ is overkill, though. The & operator in Data.Function does the job:
import Data.Function
foo = [1,2,3] & fmap (+1) & fmap (*2)
I like that this says exactly what to start with and exactly what to do at each step from left to right. And unlike >>$, you aren't forced to remain in the monad:
bar = [1,2,3] & fmap (+1) & fmap (*2) & sum & negate
Or you can just assemble your transformation beforehand and map it over your monad:
import Control.Category
f = (+1) >>> (*2)
quuz = fmap f [1,2,3]
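For reference, a hand-checked sanity check of the three definitions above (checks is my name; it assumes foo, bar and quuz are in scope):

checks :: Bool
checks = foo == [4,6,8] && bar == (-18) && quuz == [4,6,8]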
I am trying to convert the following Haskell code to point free style, to no avail.
bar f g xs = filter f (map g xs )
I'm new to Haskell and any help would be great.
Converting to pointfree style can be done entirely mechanically, though it's hard without being comfortable with the fundamentals of Haskell syntax like left-associative function application and x + y being the same as (+) x y. I will assume you are comfortable with Haskell syntax; if not, I suggest going through the first few chapters of LYAH first.
You need the following combinators, which are in the standard library. I have also given their standard names from combinator calculus.
id :: a -> a -- I
const :: a -> b -> a -- K
(.) :: (b -> c) -> (a -> b) -> (a -> c) -- B
flip :: (a -> b -> c) -> (b -> a -> c) -- C
(<*>) :: (a -> b -> c) -> (a -> b) -> (a -> c) -- S
Work with one parameter at a time. Move parameters on the left to lambdas on the right, e.g.
f x y = Z
becomes
f = \x -> \y -> Z
I like to do this one argument at a time rather than all at once, it just looks cleaner.
Then eliminate the lambda you just created according to the following rules. I will use lowercase letters for literal variables, uppercase letters to denote more complex expressions.
If you have \x -> x, replace with id
If you have \x -> A, where A is any expression in which x does not occur, replace with const A
If you have \x -> A x, where x does not occur in A, replace with A. This is known as "eta contraction".
If you have \x -> A B, then
If x occurs in both A and B, replace with (\x -> A) <*> (\x -> B).
If x occurs in just A, replace with flip (\x -> A) B
If x occurs in just B, replace with A . (\x -> B).
If x does not occur in either A or B, well, there's another rule we should have used already.
And then work inward, eliminating the lambdas that you created. Lets work with this example:
f x y z = foo z (bar x y)
-- Move parameter to lambda:
f x y = \z -> foo z (bar x y)
-- Remember that application is left-associative, so this is the same as
f x y = \z -> (foo z) (bar x y)
-- z appears on the left and not on the right, use flip
f x y = flip (\z -> foo z) (bar x y)
-- Use rule (3)
f x y = flip foo (bar x y)
-- Next parameter
f x = \y -> flip foo (bar x y)
-- Application is left-associative
f x = \y -> (flip foo) (bar x y)
-- y occurs on the right but not the left, use (.)
f x = flip foo . (\y -> bar x y)
-- Use rule 3
f x = flip foo . bar x
-- Next parameter
f = \x -> flip foo . bar x
-- We need to rewrite this operator into normal application style
f = \x -> (.) (flip foo) (bar x)
-- Application is left-associative
f = \x -> ((.) (flip foo)) (bar x)
-- x appears on the right but not the left, use (.)
f = ((.) (flip foo)) . (\x -> bar x)
-- use rule (3)
f = ((.) (flip foo)) . bar
-- Redundant parentheses
f = (.) (flip foo) . bar
There you go, now try it on yours! There is not really any cleverness involved in deciding which rule to use: use any rule that applies and you will make progress.
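To see that the procedure really is mechanical, here is a toy sketch that encodes the rules above for a tiny expression type and reproduces the example; everything in it (Expr, occurs, abstract, pointfree, pretty) is my own illustrative code, not a real library:

-- A toy bracket-abstraction pass implementing the rules above.
data Expr
  = Var String        -- a variable
  | Prim String       -- a named combinator or library function
  | App Expr Expr     -- application
  | Lam String Expr   -- lambda
  deriving Show

occurs :: String -> Expr -> Bool
occurs x (Var y)   = x == y
occurs _ (Prim _)  = False
occurs x (App a b) = occurs x a || occurs x b
occurs x (Lam y e) = x /= y && occurs x e

-- Eliminate \x -> e, assuming e itself contains no lambdas.
abstract :: String -> Expr -> Expr
abstract x (Var y) | x == y         = Prim "id"                -- rule: \x -> x
abstract x e | not (occurs x e)     = App (Prim "const") e     -- rule: x not used
abstract x (App a (Var y))
  | x == y, not (occurs x a)        = a                        -- rule: eta contraction
abstract x (App a b)
  | occurs x a && occurs x b = App (App (Prim "(<*>)") (abstract x a)) (abstract x b)
  | occurs x a               = App (App (Prim "flip") (abstract x a)) b
  | otherwise                = App (App (Prim "(.)") a) (abstract x b)
abstract _ e = e   -- unreachable for lambda-free input

-- Work inward: convert the body first, then abstract the binder.
pointfree :: Expr -> Expr
pointfree (Lam x e) = abstract x (pointfree e)
pointfree (App a b) = App (pointfree a) (pointfree b)
pointfree e         = e

pretty :: Expr -> String
pretty (Var x)   = x
pretty (Prim s)  = s
pretty (Lam x e) = "\\" ++ x ++ " -> " ++ pretty e
pretty (App a b) = pretty a ++ " " ++ arg b
  where
    arg e@(App _ _) = "(" ++ pretty e ++ ")"
    arg e@(Lam _ _) = "(" ++ pretty e ++ ")"
    arg e           = pretty e

-- f x y z = foo z (bar x y), as in the worked example above
example :: Expr
example = Lam "x" (Lam "y" (Lam "z" body))
  where
    body = App (App (Prim "foo") (Var "z"))
               (App (App (Prim "bar") (Var "x")) (Var "y"))

main :: IO ()
main = putStrLn (pretty (pointfree example))
-- prints: (.) ((.) (flip foo)) bar
-- i.e. ((.) (flip foo)) . bar, the same result as the hand derivation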
Neither of the existing answers really answers your specific question in an elucidating way: one says "here are the rules, work it out for yourself" and the other says "here is the answer, with no information about how the rules generate it."
The first three steps are really easy and consist in removing a common x from something of the form h x = f (g x) by writing h = f . g. Essentially it's saying "if you can write the thing in the form a $ b $ c $ ... $ y $ z and you want to remove the z, change all the dollars to dots, a . b . c . ... . y":
bar f g xs = filter f (map g xs)
= filter f $ (map g xs)
= filter f $ map g $ xs -- because a $ b $ c == a $ (b $ c).
bar f g = filter f . map g
= (filter f .) (map g)
= (filter f .) $ map $ g
bar f = (filter f .) . map
So this last f is the only tricky part, and it's tricky because the f is not at the "end" of the expression. But looking at it, we see that this is a function section (. map) applied to the rest of the expression:
bar f = (.) (filter f) . map
bar f = (. map) $ (.) $ filter $ f
bar = (. map) . (.) . filter
and that's how you reduce an expression when you don't have complicated things like f x x and the like appearing in it. In general there is a function flip f x y = f y x which "flips arguments"; you can always use that to move the f to the other side. Here we have flip (.) map . (.) . filter if you include the explicit flip call.
I asked lambdabot, a robot who hangs out on various Haskell IRC channels, to automatically work out the point-free equivalent. The command is @pl (pointless).
10:41 <frase> @pl bar f g xs = filter f (map g xs )
10:41 <lambdabot> bar = (. map) . (.) . filter
The point free version of bar is:
bar = (. map) . (.) . filter
This is arguably less comprehensible than the original (non-point-free) code. Use your good judgement when deciding whether to use point-free style on a case-by-case basis.
Finally, if you don't care for IRC there are web-based point-free
converters such as pointfree.io, the pointfree command line program, and other tools.
On the page http://en.wikibooks.org/wiki/Haskell/do_Notation, there's a very handy way to transform the do syntax with binding to the functional form (I mean, using >>=). It works well for quite a few cases, until I encountered a piece of code involving functions as a monad ((->) r).
The code is
addStuff :: Int -> Int
addStuff = do
  a <- (*2)
  b <- (+10)
  return (a+b)
This is equivalent to defining
addStuff = \x -> x*2+(x+10)
Now if I use the handy way to rewrite the do part, I get
addStuff = (*2) >>= \a ->
           (+10) >>= \b ->
           a + b
which gives a compile error. I understand that a and b are Int (or some other Num type), so the last function (\b -> a + b) has type Int -> Int, instead of Int -> Int -> Int.
But does this mean there's not always a way to transform from do notation to >>=? Is there any fix for this? Or am I just using the rule incorrectly?
The problem is that the last line is not monadic; make it so with return:
addStuff = (*2) >>= \a ->
           (+10) >>= \b ->
           return (a + b)
(As has already been answered,) the correct expression must use return on the last line:
addStuff = (*2) >>= \a ->
           (+10) >>= \b ->
           return (a + b)
(expounding on that, for some clarification) i.e. return is part of a monad definition, not of do notation.
Substituting the actual definitions for the ((->) r) monad, it is equivalent to
addStuff x
= ((\a -> (\b -> return (a + b)) =<< (+10) ) =<< (*2) ) x
= (\a -> (\b -> return (a + b)) =<< (+10) ) (x*2) x
= ( (\b -> return ((x*2) + b)) =<< (+10) ) x
= (\b -> return ((x*2) + b)) (x+10) x
= return ((x*2) + (x+10)) x
= const ((x*2) + (x+10)) x
= (x*2) + (x+10)
as expected. So in general, for functions,
do { a <- f ; b <- g ; ... ; n <- h ; return (r a b ... n) }
is the same as
\ x -> let a = f x in let b = g x in ... let n = h x in r a b ... n
(except that each identifier a,b,...,n shouldn't appear in the corresponding function call, because let bindings are recursive, and do bindings aren't).
The above do code is also exactly how liftM2 is defined in Control.Monad:
> liftM2 (+) (*2) (+10) 100
310
liftM_N for any N can be coded with the use of liftM and ap:
> (\a b c -> a+b+c) `liftM` (*2) `ap` (+10) `ap` (+1000) $ 100
1410
liftM is the monadic equivalent of fmap, which for functions is (.), so
(+) `liftM` (*2) `ap` (+10) $ x
= (+) . (*2) `ap` (+10) $ x
= ((+) . (*2)) x ( (+10) x )
= (x*2) + (x+10)
because ap f g x = f x (g x) for functions (a.k.a. S-combinator).
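Finally, a small compilable sketch (my own module; the names addStuffDo, addStuffBind and addStuffLift are mine) checking that the do version, the >>= version with return, and liftM2 all agree:

import Control.Monad (liftM2)

addStuffDo :: Int -> Int
addStuffDo = do
  a <- (*2)
  b <- (+10)
  return (a + b)

addStuffBind :: Int -> Int
addStuffBind = (*2) >>= \a ->
               (+10) >>= \b ->
               return (a + b)

addStuffLift :: Int -> Int
addStuffLift = liftM2 (+) (*2) (+10)

-- all three give 310 at 100:  100*2 + (100+10)
check :: [Int]
check = map ($ 100) [addStuffDo, addStuffBind, addStuffLift]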