Is there a way to capture the continuations in a do notation? - haskell

Since the following do block:
do
  x <- foo
  y <- bar
  return (x + y)
is desugared to the following form:
foo >>= (\x -> bar >>= (\y -> return (x + y)))
aren't \x -> ... and \y -> ... actually continuations here?
I was wondering if there is a way to capture the continuations in the definition of bind, but I can't get the types right. I.e:
data Pause a = Pause a | Stop

instance Monad Pause where
  return x = Stop
  m >>= k = Pause k -- this doesn't work of course
Now I tried muddling around with the types:
data Pause a = Pause a (a -> Pause ???) | Stop
--                     ^------- k ------^
But this doesn't work either. Is there no way to capture these implicit continuations?
Btw, I know about the Cont monad, I'm just experimenting and trying out stuff.

OK, I'm not really sure, but let me offer a few thoughts. I'm not quite sure what it
ought to mean to capture the continuation. You could, for example, capture the whole
do block in a structure:
{-# LANGUAGE ExistentialQuantification #-}
import Control.Monad
data MonadExp b = Return b | forall a. Bind (MonadExp a) (a -> MonadExp b)
-- Functor/Applicative instances, required by modern GHC:
instance Functor MonadExp where fmap f m = Bind m (Return . f)
instance Applicative MonadExp where
  pure = Return
  mf <*> mx = Bind mf (\f -> Bind mx (Return . f))
instance Monad MonadExp where
  return x = Return x
  f >>= g = Bind f g
For example:
block :: MonadExp Int
block = do
  x <- return 1
  y <- return 2
  return $ x + y
instance Show (MonadExp a) where
  show (Return _) = "Return _"
  show (Bind _ _) = "Bind _ _"
print block
>> Bind _ _
And then evaluate the whole thing:
finish :: MonadExp a -> a
finish (Return x) = x
finish (Bind f g) = finish $ g (finish f)
print $ finish block
>> 3
Or step through it and see the parts:
step :: MonadExp a -> MonadExp a
step (Return _) = error "At the end"
step (Bind f g) = g $ finish f
print $ step block
>> Bind _ _
print $ step $ step block
>> Return _
Well, now that I think about it more, that's probably not what you're asking. But
maybe it'll help you think.

Well, I don't know if your lambdas are continuations in the strictest sense of the term, but they do look similar to that concept to my eye.
But note that if they are continuations, then the desugared monadic code is already written in continuation passing style (CPS). The usual notion of a control operator that "captures" a continuation is based on direct-style programs; the "captured" continuation is only implicit in the direct-style program, but the CPS transformation makes it explicit.
Since the desugared monadic code is already in CPS or something like it, well, maybe a way to frame your question is whether monadic code can express some of the control flow tricks that CPS code can. Typically, those tricks boil down to the idea that while under the CPS regime it is conventional for a function to finish by invoking its continuation, a function can choose to replace its continuation with another of its choosing. This replacement continuation can be constructed with a reference to the original continuation, so that it can in turn "restore" that original one if it chooses. So for example, coroutines are implemented as a mutual "replace/restore" cycle.
And looked at in this light, I think the answer is mostly no; CPS requires that in foo >>= bar, foo must be able to choose whether bar will be invoked at all, and foo must be able to supply a substitute for bar, but (>>=) by itself does not offer a mechanism for foo to do this; more importantly, (>>=) is in control of the execution flow, not foo. Some specific monads implement parts or all of it (for example, the Maybe monad allows foo to forego execution of bar by producing a Nothing result), but others don't.
Closest I could get is to forego (>>=) and use this instead:
-- | Execute action #foo# with its "continuation" #bar#.
callCC :: Monad m => ((a -> m b) -> m b) -> (a -> m b) -> m b
foo `callCC` bar = foo bar
Here foo can choose whether bar will be used at all. But notice that this callCC is really just ($)!
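For instance (a tiny example of my own, in the Maybe monad), foo can simply ignore the bar it is handed and produce its own result:
skipBar :: Maybe Int
skipBar = (\_bar -> Just 42) `callCC` (\x -> Just (x + 1))
-- skipBar == Just 42; the "continuation" bar is never invoked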

Related

confusion over the passing of State monad in Haskell

In Haskell the State monad is passed around to extract and store state. And in the two following examples, both pass the State monad using >>, and a close verification (by function inlining and reduction) confirms that the state is indeed passed to the next step.
Yet this seems not very intuitive. So does this mean when I want to pass the State monad I just need >> (or the >>= and lambda expression \s -> a where s is not free in a)? Can anyone provide an intuitive explanation for this fact without bothering to reduce the function?
-- the first example
tick :: State Int Int
tick = get >>= \n ->
       put (n+1) >>
       return n
-- the second example
type GameValue = Int
type GameState = (Bool, Int)

playGame' :: String -> State GameState GameValue
playGame' [] = get >>= \(on, score) -> return score
playGame' (x:xs) = get >>= \(on, score) ->
  (case x of
     'a' | on -> put (on, score+1)
     'b' | on -> put (on, score-1)
     'c'      -> put (not on, score)
     _        -> put (on, score))
  >> playGame' xs
Thanks a lot!
It really boils down to understanding that State s a is isomorphic to s -> (a, s). So any value "wrapped" in a monadic action is the result of applying a transformation to some state s (a stateful computation producing an a).
Passing a state between two stateful computations
f :: a -> State s b
g :: b -> State s c
corresponds to composing them with >=>
f >=> g
or using >>=
\a -> f a >>= g
the result here is
a -> State s c
It is a stateful action that transforms some underlying state s in some way; it is allowed access to some a, and it produces some c. So the entire transformation is allowed to depend on a, and the value c is allowed to depend on some state s. This is exactly what you would want to express a stateful computation. The neat thing (and the sole purpose of expressing this machinery as a monad) is that you do not have to bother with passing the state around. But to understand how it is done, please refer to the definition of >>= on Hackage (just ignore for a moment that it is a transformer rather than a plain monad):
m >>= k = StateT $ \s -> do
  ~(a, s') <- runStateT m s
  runStateT (k a) s'
You can disregard the wrapping and unwrapping with StateT and runStateT. Here m is of form s -> (a, s), k is of form a -> (s -> (b, s)), and you wish to produce a stateful transformation s -> (b, s). So the result is going to be a function of s. To produce b you can use k, but you need an a first; how do you produce an a? You can take m and apply it to the state s. You also get a modified state s' from the first monadic action m, and you pass that state into (k a) (which is of type s -> (b, s)). It is here that the state s passes through m to become s', and is then passed through k to become some final state.
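Stripped of the transformer wrapping, the same definition can be sketched directly on the s -> (a, s) representation (the name bindState is mine):
bindState :: (s -> (a, s)) -> (a -> s -> (b, s)) -> (s -> (b, s))
bindState m k = \s -> let (a, s') = m s -- run the first action on the incoming state
                      in k a s'         -- feed its result and new state to k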
For you as a user of this mechanism, this remains hidden, and that is the neat thing about monads. If you want a state to evolve along some computation, you build your computation from small steps that you express as State-actions and you let do-notation or bind (>>=) to do the chaining/passing.
The sole difference between >>= and >> is that you either care or don't care about the non-state result.
a >> b
is in fact equivalent to
a >>= \_ -> b
so whatever value gets output by the action a, you throw it away (keeping only the modified state) and continue (pass the state along) with the other action b.
Regarding your examples
tick :: State Int Int
tick = get >>= \n ->
       put (n+1) >>
       return n
you can rewrite it in do-notation as
tick = do
  n <- get
  put (n + 1)
  return n
While the first way of writing it makes it more explicit what is passed where, the second way nicely shows how you do not have to care about it.
First, get the current state and expose it (get :: s -> (s, s) in a simplified setting); the <- says that you do care about the value and do not want to throw it away. The underlying state is also passed along in the background without a change (that is how get works).
Then put :: s -> (s -> ((), s)), which is equivalent after dropping unnecessary parens to put :: s -> s -> ((), s), takes a value to replace the current state with (the first argument), and produces a stateful action whose result is the uninteresting value () which you drop (because you do not use <- or because you use >> instead of >>=). Due to put the underlying state has changed to n + 1 and as such it is passed on.
return does nothing to the underlying state, it only returns its argument.
To summarise: tick starts with some initial state s, updates it to s+1 internally, and outputs s on the side.
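You can see both halves at once in GHCi (assuming the usual runState from Control.Monad.State is in scope):
*Main> runState tick 5
(5,6)
The first component is tick's output (the old counter value), the second is the updated state.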
The other example works exactly the same way, >> is only used there to throw away the () produced by put. But state gets passed around all the time.

Why does bind (>>=) exist? What are typical cases where a solution without bind is ugly?

This is a type declaration of a bind method:
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
I read this as follows: apply a function that returns a wrapped value, to a wrapped value.
This method was included in the Prelude as part of the Monad typeclass. That means there are a lot of cases where it's needed.
OK, but I don't understand why it's a typical solution of a typical case at all.
If you've already created a function which returns a wrapped value, why doesn't that function already take a wrapped value?
In other words, what are typical cases where there are many functions which take a normal value, but return a wrapped value (instead of taking a wrapped value and returning a wrapped value)?
The 'unwrapping' of values is exactly what you want to keep hidden when dealing with monads, since it is this that causes a lot of boilerplate.
For example, if you have a sequence of operations which return Maybe values that you want to combine, you have to manually propagate Nothing if you receive one:
nested :: a -> Maybe b
nested x = case f x of
  Nothing -> Nothing
  Just r ->
    case g r of
      Nothing -> Nothing
      Just r' ->
        case h r' of
          Nothing  -> Nothing
          Just r'' -> i r''
This is what bind does for you:
Nothing >>= _ = Nothing
Just a >>= f = f a
so you can just write:
nested x = f x >>= g >>= h >>= i
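Equivalently, in do notation:
nested x = do
  r   <- f x
  r'  <- g r
  r'' <- h r'
  i r''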
Some monads don't allow you to manually unpack the values at all - the most common example is IO. The only way to get at the value inside an IO is to fmap or >>=, and both of these require you to propagate IO in the output.
Everyone focuses on IO monad and inability to "unwrap".
But a Monad is not always a container, so you can't unwrap.
Reader r a == r->a such that (Reader r) is a Monad
to my mind is the simplest, best example of a Monad that is not a container.
You can easily write a function that can produce m b given a: a->(r->b). But you can't easily "unwrap" the value from m a, because a is not wrapped in it. Monad is a type-level concept.
Also, notice that having a function m a -> m b does not give you a Monad. What Monad gives you is a way to build a function m a -> m b from a -> m b (compare: Functor gives you a way to build m a -> m b from a -> b; Applicative gives you a way to build m a -> m b from m (a -> b)).
If you've already created a function which returns a wrapped value, why doesn't that function already take a wrapped value?
Because that function would have to unwrap its argument in order to do something with it.
But for many choices of m, you can only unwrap a value if you will eventually rewrap your own result. This idea of "unwrap, do something, then rewrap" is embodied in the (>>=) function, which unwraps for you, lets you do something, and forces you to rewrap by the type a -> m b.
To understand why you cannot unwrap without eventually rewrapping, we can look at some examples:
If m a = Maybe a, unwrapping for Just x would be easy: just return x. But how can we unwrap Nothing? We cannot. But if we know that we will eventually rewrap, we can skip the "do something" step and return Nothing for the overall operation.
If m a = [a], unwrapping for [x] would be easy: just return x. But for unwrapping [], we need the same trick as for Maybe a. And what about unwrapping [x, y, z]? If we know that we will eventually rewrap, we can execute the "do something" three times, for x, y and z and concat the results into a single list.
If m a = IO a, no unwrapping is easy because we only know the result sometimes in the future, when we actually run the IO action. But if we know that we will eventually rewrap, we can store the "do something" inside the IO action and perform it later, when we execute the IO action.
I hope these examples make it clear that for many interesting choices of m, we can only implement unwrapping if we know that we are going to rewrap. The type of (>>=) allows precisely this assumption, so it is cleverly chosen to make things work.
While (>>=) can sometimes be useful when used directly, its main purpose is to implement the <- bind syntax in do notation. It has the type m a -> (a -> m b) -> m b mainly because, when used in a do notation block: the right hand side of the <- is of type m a; the left hand side "binds" an a to the given identifier and, when combined with the remainder of the do block, is of type a -> m b; the resulting monadic action is of type m b; and this is the only type it possibly could have to make this work.
For example:
echo = do
  input <- getLine
  putStrLn input
The right hand side of the <- is of type IO String
The left hand side of the <-, together with the remainder of the do block, is of type String -> IO (). Compare with the desugared version using >>=:
echo = getLine >>= (\input -> putStrLn input)
The left hand side of the >>= is of type IO String. The right hand side is of type String -> IO (). Now, by applying an eta reduction to the lambda we can instead get:
echo = getLine >>= putStrLn
which shows why >>= is sometimes used directly rather than as the "engine" that powers do notation along with >>.
I'd also like to provide what I think is an important correction to the concept of "unwrapping" a monadic value, which is that it doesn't happen. The Monad class does not provide a generic function of type Monad m => m a -> a. Some particular instances do but this is not a feature of monads in general. Monads, generally speaking, cannot be "unwrapped".
Remember that m >>= k = join (fmap k m) is a law that must be true for any monad. Any particular implementation of >>= must satisfy this law and so must be equivalent to this general implementation.
What this means is that what really happens is that the monadic "computation" a -> m b is "lifted" to become an m a -> m (m b) using fmap and then applied to the m a, giving an m (m b); and then join :: m (m a) -> m a is used to squish the two ms together to yield an m b. So the a never gets "out" of the monad. The monad is never "unwrapped". This is an incorrect way to think about monads and I would strongly recommend that you not get in the habit.
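To see the law in action, here is a small step-by-step check of my own in the Maybe monad:
Just 3 >>= (\x -> Just (x + 1))
join (fmap (\x -> Just (x + 1)) (Just 3))
join (Just (Just 4))
Just 4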
I will focus on your point
If you've already created a function which returns a wrapped value, why doesn't that function already take a wrapped value?
and the IO monad. Suppose you had
getLine :: IO String
putStrLn :: IO String -> IO () -- "already takes a wrapped value"
how could one write a program which reads a line and prints it twice? An attempt would be
let line = getLine
in putStrLn line >> putStrLn line
but equational reasoning dictates that this is equivalent to
putStrLn getLine >> putStrLn getLine
which reads two lines instead.
What we lack is a way to "unwrap" the getLine once, and use it twice. The same issue would apply to reading a line, printing "hello", and then printing a line:
let line = getLine in putStrLn "hello" >> putStrLn line
-- equivalent to
putStrLn "hello" >> putStrLn getLine
So, we also lack a way to specify "when to unwrap" the getLine. The bind >>= operator provides a way to do this.
A more advanced theoretical note
If you swap the arguments around the (>>=) bind operator becomes (=<<)
(=<<) :: (a -> m b) -> (m a -> m b)
which turns any function f taking an unwrapped value into a function g taking a wrapped
value. Such g is known as the Kleisli extension of f. The bind operator guarantees
such an extension always exists, and provides a convenient way to use it.
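For a small concrete illustration (safeSqrt is an invented example of mine):
safeSqrt :: Double -> Maybe Double
safeSqrt x = if x >= 0 then Just (sqrt x) else Nothing

-- the Kleisli extension of safeSqrt acts on wrapped values:
extended :: Maybe Double -> Maybe Double
extended = (safeSqrt =<<)
-- extended (Just 4) == Just 2.0; extended Nothing == Nothing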
Because we like to be able to apply functions like a -> b to our m as. Lifting such a function f to m a -> m b is trivial (liftM f, liftA f, fmap f, or (>>= return . f)), but the opposite is not necessarily possible.
You want some typical examples? How about putStrLn :: String -> IO ()? It would make no sense for this function to have the type IO String -> IO () because the origin of the string doesn't matter.
Anyway: You might have the wrong idea because of your "wrapped value" metaphor; I use it myself quite often, but it has its limitations. There isn't necessarily a pure way to get an a out of an m a - for example, if you have a getLine :: IO String, there's not a great deal of interesting things you can do with it - you can put it in a list, chain it in a row and other neat things, but you can't get any useful information out of it because you can't look inside an IO action. What you can do is use >>= which gives you a way to use the result of the action.
Similar things apply to monads where the "wrapping" metaphor applies too; for example, the point of the Maybe monad is to avoid manually wrapping and unwrapping values with Just all the time.
My two most common examples:
1) I have a series of functions that generate a list of lists, but I finally need a flat list:
f :: a -> [a]

fAppliedThrice :: [a] -> [a]
fAppliedThrice aList = concat (map f (concat (map f (concat (map f aList)))))

fAppliedThrice' :: [a] -> [a]
fAppliedThrice' aList = aList >>= f >>= f >>= f
A practical example of using this was when my functions fetched attributes of a foreign key relationship. I could just chain them together to finally obtain a flat list of attributes. E.g., a Product hasMany Review hasMany Tag type relationship, where I finally want a list of all the tag names for a product. (I added some Template Haskell and got a very good generic attribute fetcher for my purposes.)
2) Say you have a series of filter-like functions to apply to some data. And they return Maybe values.
-- filter1..filter3 are hypothetical Maybe-returning checks
case (val >>= filter1 >>= filter2 >>= filter3) of
  Nothing -> putStrLn "Bad data"
  Just x  -> putStrLn "Good data"
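A self-contained version of this pattern, with made-up filters standing in for the real ones:
checkPositive, checkEven :: Int -> Maybe Int
checkPositive x = if x > 0 then Just x else Nothing
checkEven x = if even x then Just x else Nothing

main :: IO ()
main = case Just 4 >>= checkPositive >>= checkEven of
  Nothing -> putStrLn "Bad data"
  Just _  -> putStrLn "Good data" -- this branch runs for 4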

A monad with a "real" return function

A common beginner's mistake is to see return and think that it's a language keyword that exits the current function with a return value. Of course, we know that isn't what it does at all. But I wonder... could we actually make such a function? Purely for argument's sake, at this point.
It seems we're looking for some monad Foo which possesses a function
exit :: x -> Foo x
which will abort the rest of the computation and immediately return x.
Is such a thing constructable? If it could be constructed, would it be useful? Is this even a sane thing to attempt to do?
Yes, it is possible in the Cont monad. Cont is defined as follows:
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Monad (Cont r) where
  return a = Cont ($ a)
  m >>= k = Cont $ \c -> runCont m $ \a -> runCont (k a) c
Now we can create exit as follows:
exit = Cont . const
In fact you can do it in almost any monad (for example you could do it in the Either monad but not in the State monad). However the Cont monad is the mother of all monads: http://blog.sigfpe.com/2008/12/mother-of-all-monads.html
Of course it's doable. It's even already in the standard libraries. You just need a slightly saner type signature.
import Control.Monad

exit :: a -> Either a b
exit = Left

-- the annotation pins down the otherwise ambiguous result type:
main = print $ do
  x <- Right 5
  exit "foo"
  error "this is a good place to crash" :: Either String Int
The important thing to note is that the type given when bailing out isn't free - it has to match the type of the Either.
It appears you can just about implement this:
data Foo e x = Next x
             | Done e

-- Functor/Applicative instances, required by modern GHC:
instance Functor (Foo e) where
  fmap f (Next x) = Next (f x)
  fmap _ (Done e) = Done e
instance Applicative (Foo e) where
  pure = Next
  Next f <*> x = fmap f x
  Done e <*> _ = Done e
instance Monad (Foo e) where
  return = Next
  (Next x) >>= f = f x
  (Done e) >>= _ = Done e

exit :: e -> Foo e x
exit = Done

run :: Foo x x -> x
run (Next x) = x
run (Done x) = x
Notice how the monad has an extra type parameter; I don't think it's possible to avoid this. Basically Foo e x produces an x and will "ultimately" produce an e. Note that the run function requires both types to match; you could equally demand that run only works if exit gets called somewhere, or one of a few other possibilities...
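A usage sketch of my own (assuming the instances above): exit short-circuits the rest of the do block.
demo :: Int
demo = run $ do
  x <- return 1
  exit 42 -- everything after this line is skipped
  return (x + 1)
-- demo == 42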
So, yes you can construct this, and no, it's not especially useful AFAIK.
If you want to satisfy the monad laws, you certainly can't call it return since we need that return a >>= f = f a.
Also, I don't think it can exist at all with the precise specification given. Suppose x :: a and f :: a -> Foo b; then, just following the types, exit x >>= f :: Foo b, yet we want it to produce x :: a.

What advantage does Monad give us over an Applicative?

I've read this article, but didn't understand the last section.
The author says that Monad gives us context sensitivity, but it's possible to achieve the same result using only an Applicative instance:
let maybeAge = (\futureYear birthYear ->
                  if futureYear < birthYear
                    then yearDiff birthYear futureYear
                    else yearDiff futureYear birthYear)
               <$> readMay futureYearString
               <*> readMay birthYearString
It's uglier for sure without do-syntax, but besides that I don't see why we need Monad. Can anyone clear this up for me?
Here's a couple of functions that use the Monad interface.
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM c x y = c >>= \z -> if z then x else y
whileM :: Monad m => (a -> m Bool) -> (a -> m a) -> a -> m a
whileM p step x = ifM (p x) (step x >>= whileM p step) (return x)
You can't implement them with the Applicative interface. But for the sake of enlightenment, let's try and see where things go wrong. How about..
import Control.Applicative
ifA :: Applicative f => f Bool -> f a -> f a -> f a
ifA c x y = (\c' x' y' -> if c' then x' else y') <$> c <*> x <*> y
Looks good! It has the right type, it must be the same thing! Let's just check to make sure..
*Main> ifM (Just True) (Just 1) (Just 2)
Just 1
*Main> ifM (Just True) (Just 1) (Nothing)
Just 1
*Main> ifA (Just True) (Just 1) (Just 2)
Just 1
*Main> ifA (Just True) (Just 1) (Nothing)
Nothing
And there's your first hint at the difference. You can't write a function using just the Applicative interface that replicates ifM.
If you divide this up into thinking about values of the form f a as being about "effects" and "results" (both of which are very fuzzy approximate terms that are the best terms available, but not very good), you can improve your understanding here. In the case of values of type Maybe a, the "effect" is success or failure, as a computation. The "result" is a value of type a that might be present when the computation completes. (The meanings of these terms depend heavily on the concrete type, so don't think this is a valid description of anything other than Maybe as a type.)
Given that setting, we can look at the difference in a bit more depth. The Applicative interface allows the "result" control flow to be dynamic, but it requires the "effect" control flow to be static. If your expression involves 3 computations that can fail, the failure of any one of them causes the failure of the whole computation. The Monad interface is more flexible. It allows the "effect" control flow to depend on the "result" values. ifM chooses which argument's "effects" to include in its own "effects" based on its first argument. This is the huge fundamental difference between ifA and ifM.
There's something even more serious going on with whileM. Let's try to make whileA and see what happens.
whileA :: Applicative f => (a -> f Bool) -> (a -> f a) -> a -> f a
whileA p step x = ifA (p x) (whileA p step <*> step x) (pure x)
Well.. What happens is a compile error. (<*>) doesn't have the right type there. whileA p step has the type a -> f a and step x has the type f a. (<*>) isn't the right shape to fit them together. For it to work, the function type would need to be f (a -> a).
You can try lots more things - but you'll eventually find that whileA has no implementation that works anything even close to the way whileM does. I mean, you can implement the type, but there's just no way to make it both loop and terminate.
Making it work requires either join or (>>=). (Well, or one of the many equivalents of one of those.) And those are the extra things you get out of the Monad interface.
With monads, subsequent effects can depend on previous values. For example, you can have:
main = do
  b <- readLn :: IO Bool
  if b
    then fireMissiles
    else return ()
You can't do that with Applicatives - the result value of one effectful computation can't determine what effect will follow.
Somewhat related:
Why can applicative functors have side effects, but functors can't?
Good examples of Not a Functor/Functor/Applicative/Monad?
As Stephen Tetley said in a comment, that example doesn't actually use context-sensitivity. One way to think about context-sensitivity is that it lets us choose which actions to take depending on monadic values. Applicative computations must always have the same "shape", in a certain sense, regardless of the values involved; monadic computations need not. I personally think this is easier to understand with a concrete example, so let's look at one. Here's two versions of a simple program which ask you to enter a password, check that you entered the right one, and print out a response depending on whether or not you did.
import Control.Applicative
checkPasswordM :: IO ()
checkPasswordM = do putStrLn "What's the password?"
                    pass <- getLine
                    if pass == "swordfish"
                      then putStrLn "Correct. The secret answer is 42."
                      else putStrLn "INTRUDER ALERT! INTRUDER ALERT!"

checkPasswordA :: IO ()
checkPasswordA = if' . (== "swordfish")
             <$> (putStrLn "What's the password?" *> getLine)
             <*> putStrLn "Correct. The secret answer is 42."
             <*> putStrLn "INTRUDER ALERT! INTRUDER ALERT!"

if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ f = f
Let's load this into GHCi and check what happens with the monadic version:
*Main> checkPasswordM
What's the password?
swordfish
Correct. The secret answer is 42.
*Main> checkPasswordM
What's the password?
zvbxrpl
INTRUDER ALERT! INTRUDER ALERT!
So far, so good. But if we use the applicative version:
*Main> checkPasswordA
What's the password?
hunter2
Correct. The secret answer is 42.
INTRUDER ALERT! INTRUDER ALERT!
We entered the wrong password, but we still got the secret! And an intruder alert! This is because <$> and <*>, or equivalently liftAn/liftMn, always execute the effects of all their arguments. The applicative version translates, in do notation, to
do pass  <- putStrLn "What's the password?" *> getLine
   unit1 <- putStrLn "Correct. The secret answer is 42."
   unit2 <- putStrLn "INTRUDER ALERT! INTRUDER ALERT!"
   pure $ if' (pass == "swordfish") unit1 unit2
And it should be clear why this has the wrong behavior. In fact, every use of applicative functors is equivalent to monadic code of the form
do val1 <- app1
   val2 <- app2
   ...
   valN <- appN
   pure $ f val1 val2 ... valN
(where some of the appI are allowed to be of the form pure xI). And equivalently, any monadic code in that form can be rewritten as
f <$> app1 <*> app2 <*> ... <*> appN
or equivalently as
liftAN f app1 app2 ... appN
To think about this, consider Applicative's methods (together with <$> from its Functor superclass):
pure  :: a -> f a
(<$>) :: (a -> b) -> f a -> f b
(<*>) :: f (a -> b) -> f a -> f b
And then consider what Monad adds:
(=<<) :: (a -> m b) -> m a -> m b
join :: m (m a) -> m a
(Remember that you only need one of those.)
Handwaving a lot, if you think about it, the only way we can put together the applicative functions is to construct chains of the form f <$> app1 <*> ... <*> appN, and possibly nest those chains (e.g., f <$> (g <$> x <*> y) <*> z). However, (=<<) (or (>>=)) allows us to take a value and produce different monadic computations depending on that value, that could be constructed on the fly. This is what we use to decide whether to compute "print out the secret", or compute "print out an intruder alert", and why we can't make that decision with applicative functors alone; none of the types for applicative functions allow you to consume a plain value.
You can think about join in concert with fmap in a similar way: as I mentioned in a comment, you can do something like
checkPasswordFn :: String -> IO ()
checkPasswordFn pass = if pass == "swordfish"
                         then putStrLn "Correct. The secret answer is 42."
                         else putStrLn "INTRUDER ALERT! INTRUDER ALERT!"
checkPasswordA' :: IO (IO ())
checkPasswordA' = checkPasswordFn <$> (putStrLn "What's the password?" *> getLine)
This is what happens when we want to pick a different computation depending on the value, but only have applicative functionality available to us. We can pick two different computations to return, but they're wrapped inside the outer layer of the applicative functor. To actually use the computation we've picked, we need join:
checkPasswordM' :: IO ()
checkPasswordM' = join checkPasswordA'
And this does the same thing as the previous monadic version (as long as we import Control.Monad first, to get join):
*Main> checkPasswordM'
What's the password?
12345
INTRUDER ALERT! INTRUDER ALERT!
On the other hand, here's a practical example of the Applicative/Monad divide where Applicatives have an advantage: error handling! We clearly have a Monad implementation of Either that carries along errors, but it always terminates early.
Left e1 >> Left e2 === Left e1
You can think of this as an effect of intermingling values and contexts. Since (>>=) will try to pass the result of the Either e a value to a function like a -> Either e b, it must fail immediately if the input Either is Left.
Applicatives only pass their values to the final pure computation after running all of the effects. This means they can delay accessing the values for longer and we can write this.
{-# LANGUAGE DeriveFunctor #-}

data AllErrors e a = Error e | Pure a deriving (Functor)

instance Monoid e => Applicative (AllErrors e) where
  pure = Pure
  (Pure f)   <*> (Pure x)   = Pure (f x)
  (Error e)  <*> (Pure _)   = Error e
  (Pure _)   <*> (Error e)  = Error e
  -- This is the non-Monadic case: both errors are combined
  (Error e1) <*> (Error e2) = Error (e1 <> e2)
It's impossible to write a Monad instance for AllErrors such that ap matches (<*>) because (<*>) takes advantage of running both the first and second contexts before using any values in order to get both errors and (<>) them together. Monadic (>>=) and (join) can only access contexts interwoven with their values. That's why Either's Applicative instance is left-biased, so that it can also have a harmonious Monad instance.
> Left "a" <*> Left "b"
Left "a"
> Error "a" <*> Error "b"
Error "ab"
With Applicative, the sequence of effectful actions to be performed is fixed at compile-time. With Monad, it can be varied at run-time based on the results of effects.
For example, with an Applicative parser, the sequence of parsing actions is fixed for all time. That means that you can potentially perform "optimisations" on it. On the other hand, I can write a Monadic parser which parses a BNF grammar description, dynamically constructs a parser for that grammar, and then runs that parser over the rest of the input. Every time you run this parser, it potentially constructs a brand new parser to parse the second portion of the input. Applicative has no hope of doing such a thing - and there is no chance of performing compile-time optimisations on a parser that doesn't exist yet...
As you can see, sometimes the "limitation" of Applicative is actually beneficial - and sometimes the extra power offered by Monad is required to get the job done. This is why we have both.
If you try to convert the type signatures of Monad's bind and Applicative's <*> to natural language, you will find that:
bind: I will give you the contained value and you will return me a new packaged value.
<*>: You give me a packaged function that accepts a contained value and returns a value, and I will use it to create a new packaged value based on my rules.
Now as you can see from the above descriptions, bind gives you more control as compared to <*>.
If you work with Applicatives, the "shape" of the result is already determined by the "shape" of the input, e.g. if you call [f,g,h] <*> [a,b,c,d,e], your result will be a list of 15 elements, regardless of which values the variables have. You don't have this guarantee/limitation with monads. Consider [x,y,z] >>= join replicate: for [0,0,0] you'll get the result [], for [1,2,3] the result [1,2,2,3,3,3].
Now that the ApplicativeDo extension has become a pretty common thing, the difference between Monad and Applicative can be illustrated using a simple code snippet.
With Monad you can do
do
  r1 <- act1
  if r1
    then act2
    else act3
but having only an Applicative do-block, you can't use if on things you've pulled out with <-.
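Conversely, when no later action depends on an earlier result, ApplicativeDo lets the same do syntax desugar through Applicative alone. A minimal sketch:
{-# LANGUAGE ApplicativeDo #-}

-- pairUp desugars to (,) <$> fa <*> fb, so Applicative suffices:
pairUp :: Applicative f => f a -> f b -> f (a, b)
pairUp fa fb = do
  a <- fa
  b <- fb
  pure (a, b)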

Why monads? How does it resolve side-effects?

I am learning Haskell and trying to understand Monads. I have two questions:
From what I understand, Monad is just another typeclass that declares ways to interact with data inside "containers", including Maybe, List, and IO. It seems clever and clean to implement these 3 things with one concept, but really, the point is so there can be clean error handling in a chain of functions, containers, and side effects. Is this a correct interpretation?
How exactly is the problem of side-effects solved? With this concept of containers, the language essentially says anything inside the containers is non-deterministic (such as I/O). Because lists and IOs are both containers, lists are equivalence-classed with IO, even though values inside lists seem pretty deterministic to me. So what is deterministic and what has side-effects? I can't wrap my head around the idea that a basic value is deterministic, until you stick it in a container (which is no more special than the same value with some other values next to it, e.g. Nothing) and it can now be random.
Can someone explain how, intuitively, Haskell gets away with changing state with inputs and output? I'm not seeing the magic here.
The point is so there can be clean error handling in a chain of functions, containers, and side effects. Is this a correct interpretation?
Not really. You've mentioned a lot of concepts that people cite when trying to explain monads, including side effects, error handling and non-determinism, but it sounds like you've gotten the incorrect sense that all of these concepts apply to all monads. But there's one concept you mentioned that does: chaining.
There are two different flavors of this, so I'll explain it two different ways: one without side effects, and one with side effects.
No Side Effects:
Take the following example:
addM :: (Monad m, Num a) => m a -> m a -> m a
addM ma mb = do
  a <- ma
  b <- mb
  return (a + b)
This function adds two numbers, with the twist that they are wrapped in some monad. Which monad? Doesn't matter! In all cases, that special do syntax de-sugars to the following:
addM ma mb =
  ma >>= \a ->
  mb >>= \b ->
  return (a + b)
... or, with operator precedence made explicit:
ma >>= (\a -> mb >>= (\b -> return (a + b)))
Now you can really see that this is a chain of little functions, all composed together, and its behavior will depend on how >>= and return are defined for each monad. If you're familiar with polymorphism in object-oriented languages, this is essentially the same thing: one common interface with multiple implementations. It's slightly more mind-bending than your average OOP interface, since the interface represents a computation policy rather than, say, an animal or a shape or something.
Okay, let's see some examples of how addM behaves across different monads. The Identity monad is a decent place to start, since its definition is trivial:
instance Monad Identity where
  return a = Identity a    -- create an Identity value
  (Identity a) >>= f = f a -- apply f to a
So what happens when we say:
addM (Identity 1) (Identity 2)
Expanding this, step by step:
(Identity 1) >>= (\a -> (Identity 2) >>= (\b -> return (a + b)))
(\a -> (Identity 2) >>= (\b -> return (a + b))) 1
(Identity 2) >>= (\b -> return (1 + b))
(\b -> return (1 + b)) 2
return (1 + 2)
Identity 3
Great. Now, since you mentioned clean error handling, let's look at the Maybe monad. Its definition is only slightly trickier than Identity:
instance Monad Maybe where
  return a = Just a       -- same as Identity monad!
  (Just a) >>= f = f a    -- same as Identity monad again!
  Nothing >>= _ = Nothing -- the only real difference from Identity
So you can imagine that if we say addM (Just 1) (Just 2) we'll get Just 3. But for grins, let's expand addM Nothing (Just 1) instead:
Nothing >>= (\a -> (Just 1) >>= (\b -> return (a + b)))
Nothing
Or the other way around, addM (Just 1) Nothing:
(Just 1) >>= (\a -> Nothing >>= (\b -> return (a + b)))
(\a -> Nothing >>= (\b -> return (a + b))) 1
Nothing >>= (\b -> return (1 + b))
Nothing
So the Maybe monad's definition of >>= was tweaked to account for failure. When a function is applied to a Maybe value using >>=, you get what you'd expect.
Okay, so you mentioned non-determinism. Yes, the list monad can be thought of as modeling non-determinism in a sense... It's a little weird, but think of the list as representing alternative possible values: [1, 2, 3] is not a collection, it's a single non-deterministic number that could be either one, two or three. That sounds dumb, but it starts to make some sense when you think about how >>= is defined for lists: it applies the given function to each possible value. So addM [1, 2] [3, 4] is actually going to compute all possible sums of those two non-deterministic values: [4, 5, 5, 6].
Okay, now to address your second question...
Side Effects:
Let's say you apply addM to two values in the IO monad, like:
addM (return 1 :: IO Int) (return 2 :: IO Int)
You don't get anything special, just 3 in the IO monad. addM does not read or write any mutable state, so it's kind of no fun. Same goes for the State or ST monads. No fun. So let's use a different function:
fireTheMissiles :: IO Int -- returns the number of casualties
Clearly the world will be different each time missiles are fired. Clearly. Now let's say you're trying to write some totally innocuous, side effect free, non-missile-firing code. Perhaps you're trying once again to add two numbers, but this time without any monads flying around:
add :: Num a => a -> a -> a
add a b = a + b
and all of a sudden your hand slips, and you accidentally typo:
add a b = a + b + fireTheMissiles
An honest mistake, really. The keys were so close together. Fortunately, because fireTheMissiles was of type IO Int rather than simply Int, the compiler is able to avert disaster.
Okay, totally contrived example, but the point is that in the case of IO, ST and friends, the type system keeps effects isolated to some specific context. It doesn't magically eliminate side effects, making code referentially transparent that shouldn't be, but it does make it clear at compile time what scope the effects are limited to.
So getting back to the original point: what does this have to do with chaining or composition of functions? Well, in this case, it's just a handy way of expressing a sequence of effects:
fireTheMissilesTwice :: IO ()
fireTheMissilesTwice = do
  a <- fireTheMissiles
  print a
  b <- fireTheMissiles
  print b
Summary:
A monad represents some policy for chaining computations. Identity's policy is pure function composition, Maybe's policy is function composition with failure propagation, IO's policy is impure function composition, and so on.
Let me start by pointing at the excellent "You could have invented monads" article. It illustrates how the Monad structure can naturally manifest while you are writing programs. But the tutorial doesn't mention IO, so I will have a stab here at extending the approach.
Let us start with what you probably have already seen - the container monad. Let's say we have:
f, g :: Int -> [Int]
One way of looking at this is that it gives us a number of possible outputs for every possible input. What if we want all possible outputs for the composition of both functions - all the possibilities we could get by applying the functions one after the other?
Well, there's a function for that:
fg x = concatMap g $ f x
If we put this more generally, we get:
fg x = f x >>= g
where, for lists:
xs >>= f = concatMap f xs
return x = [x]
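A concrete check (these particular bodies for f and g are my own examples):
f, g :: Int -> [Int]
f x = [x - 1, x + 1]
g x = [x, 10 * x]

-- fg 5 == concatMap g (f 5) == [4, 40, 6, 60]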
Why would we want to wrap it like this? Well, writing our programs primarily using >>= and return gives us some nice properties - for example, we can be sure that it's relatively hard to "forget" a solution; we'd have to drop one explicitly, say by adding another function skip. And also we now have a monad and can use all the combinators from the monad library!
Now, let us jump to your trickier example. Let's say the two functions are "side-effecting". That's not non-deterministic; it just means that in theory the whole world is both their input (as it can influence them) as well as their output (as the functions can influence it). So we get something like:
f, g :: Int -> RealWorld# -> (Int, RealWorld#)
If we now want g to see the world that f left behind, we'd write:
fg x rw = let (y, rw')  = f x rw
              (r, rw'') = g y rw'
          in (r, rw'')
Or generalized:
fg x = f x >>= g

x >>= f = \rw -> let (y, rw')  = x rw
                     (r, rw'') = f y rw'
                 in (r, rw'')
return x = \rw -> (x, rw)
Now if the user can only use >>=, return and a few pre-defined IO values we get a nice property again: The user will never actually see the RealWorld# getting passed around! And that is a very good thing, as you aren't really interested in the details of where getLine gets its data from. And again we get all the nice high-level functions from the monad libraries.
So the important things to take away:
The monad captures common patterns in your code, like "always pass all elements of container A to container B" or "pass this real-world-tag through". Often, once you realize that there is a monad in your program, complicated things become simply applications of the right monad combinator.
The monad allows you to completely hide the implementation from the user. It is an excellent encapsulation mechanism, be it for your own internal state or for how IO manages to squeeze non-purity into a pure program in a relatively safe way.
Appendix
In case someone is still scratching their head over RealWorld# as much as I did when I started: there's obviously more magic going on after all the monad abstraction has been removed. Then the compiler will make use of the fact that there can only ever be one "real world". That's good news and bad news:
It follows that the compiler must guarantee execution ordering between functions (which is what we were after!)
But it also means that actually passing the real world isn't necessary as there is only one we could possibly mean: The one that is current when the function gets executed!
Bottom line is that once execution order is fixed, RealWorld# simply gets optimized out. Therefore programs using the IO monad actually have zero runtime overhead. Also note that using RealWorld# is obviously only one possible way to implement IO - but it happens to be the one GHC uses internally. The good thing about monads is that, again, the user really doesn't need to know.
You could see a given monad m as a set/family (or realm, domain, etc.) of actions (think of a C statement). The monad m defines the kind of (side-)effects that its actions may have:
with [] you can define actions which can fork their executions in different "independent parallel worlds";
with Either Foo you can define actions which can fail with errors of type Foo;
with IO you can define actions which can have side-effects on the "outside world" (access files, network, launch processes, do a HTTP GET ...);
you can have a monad whose effect is "randomness" (see package MonadRandom);
you can define a monad whose actions can make a move in a game (say chess, Go…) and receive moves from an opponent, but are not able to write to your filesystem or anything else.
Summary
If m is a monad, m a is an action which produces a result/output of type a.
The >> and >>= operators are used to create more complex actions out of simpler ones:
a >> b is a macro-action which does action a and then action b;
a >> a does action a and then action a again;
with >>= the second action can depend on the output of the first one.
The exact meaning of what an action is and what doing an action and then another one is depends on the monad: each monad defines an imperative sublanguage with some features/effects.
Simple sequencing (>>)
Let's say we have a given monad M and some actions incrementCounter, decrementCounter, readCounter:
instance Monad M where ...

-- Modify the counter and do not produce any result:
incrementCounter :: M ()
decrementCounter :: M ()

-- Get the current value of the counter
readCounter :: M Integer
Now we would like to do something interesting with those actions. The first thing is to sequence them. As in, say, C, we would like to be able to do:
// This is C:
counter++;
counter++;
We define a "sequencing operator" >>. Using this operator we can write:
incrementCounter >> incrementCounter
What is the type of incrementCounter >> incrementCounter?
It is an action made of two smaller actions, just as in C you can build compound statements from atomic statements:
// This is a macro statement made of several statements
{
    counter++;
    counter++;
}

// and we can use it anywhere we may use a statement:
if (condition) {
    counter++;
    counter++;
}
it can have the same kind of effects as its subactions;
it does not produce any output/result.
So we would like incrementCounter >> incrementCounter to be of type M (): an (macro-)action with the same kind of possible effects but without any output.
More generally, given two actions:
action1 :: M a
action2 :: M b
we define a >> b as the macro-action which is obtained by doing (whatever that means in our domain of actions) a then b, and which produces as output the result of the execution of the second action. The type of >> is:
(>>) :: M a -> M b -> M b
or more generally:
(>>) :: (Monad m) => m a -> m b -> m b
We can define bigger sequence of actions from simpler ones:
action1 >> action2 >> action3 >> action4
Input and outputs (>>=)
We would like to be able to increment by something other than 1 at a time:
incrementBy 5
We want to provide some input to our actions; in order to do this, we define a function incrementBy taking an Int and producing an action:
incrementBy :: Int -> M ()
Now we can write things like:
incrementCounter >> readCounter >> incrementBy 5
But we have no way to feed the output of readCounter into incrementBy. In order to do this, a slightly more powerful version of our sequencing operator is needed. The >>= operator can feed the output of a given action as input to the next action. We can write:
readCounter >>= incrementBy
It is an action which executes the readCounter action, feeds its output into the incrementBy function, and then executes the resulting action.
The type of >>= is:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
A (partial) example
Let's say I have a Prompt monad which can only display information (text) to the user and ask the user for information:
-- We don't have access to the internal structure of the Prompt monad
module Prompt (Prompt(), echo, prompt) where
-- Opaque
data Prompt a = ...
instance Monad Prompt where ...
-- Display a line to the CLI:
echo :: String -> Prompt ()
-- Ask a question to the user:
prompt :: String -> Prompt String
Let's try to define a promptBoolean message action which asks a question and produces a boolean value.
We use the prompt (message ++ "[y/n]") action and feed its output to a function f:
f "y" should be an action which does nothing but produce True as output;
f "n" should be an action which does nothing but produce False as output;
anything else should restart the action (do the action again);
promptBoolean would look like this:
-- Incomplete version, some bits are missing:
promptBoolean :: String -> Prompt Bool
promptBoolean message = prompt (message ++ "[y/n]") >>= f
  where f result =
          if result == "y"
            then ???? -- We need here an action which does nothing but produce True as output
            else if result == "n"
              then ???? -- We need here an action which does nothing but produce False as output
              else echo "Input not recognised, try again." >> promptBoolean message
Producing a value without effect (return)
In order to fill in the missing bits of our promptBoolean function, we need a way to represent dummy actions without any side effect which only output a given value:
-- "return 5" is an action which does nothing but outputs 5
return :: (Monad m) => a -> m a
and we can now write our promptBoolean function:
promptBoolean :: String -> Prompt Bool
promptBoolean message = prompt (message ++ "[y/n]") >>= f
  where f result =
          if result == "y"
            then return True
            else if result == "n"
              then return False
              else echo "Input not recognised, try again." >> promptBoolean message
By composing those two simple actions (promptBoolean, echo) we can define any kind of dialogue between the user and your program (the actions of the program are deterministic as our monad does not have a "randomness effect").
promptInt :: String -> Prompt Int
promptInt = ... -- similar

-- Classic "guess a number" game/dialogue
guess :: Int -> Prompt ()
guess n = promptInt "Guess:" >>= f
  where f m = if m == n
                then echo "Found"
                else (if m > n
                        then echo "Too big"
                        else echo "Too small") >> guess n
The operations of a monad
A Monad is a set of actions which can be composed with the return and >>= operators:
>>= for action composition;
return for producing a value without any (side-)effect.
These two operators are the minimal operators needed to define a Monad.
In Haskell, the >> operator is needed as well but it can in fact be derived from >>=:
(>>) :: Monad m => m a -> m b -> m b
a >> b = a >>= f
  where f x = b
In Haskell, an extra fail operator is needed as well, but this is really a hack (and it has since been moved out of Monad into the separate MonadFail class).
This is the Haskell definition of a Monad:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b -- can be derived from (>>=)
  fail :: String -> m a -- mostly a hack
Actions are first-class
One great thing about monads is that actions are first-class. You can store them in variables, and you can define functions which take actions as input and produce other actions as output. For example, we can define a while operator:
-- while x y : does action y while action x outputs True
while :: (Monad m) => m Bool -> m a -> m ()
while x y = x >>= f
  where f True  = y >> while x y
        f False = return ()
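For instance (my own sketch, using IO rather than an abstract counter monad), this keeps printing until the user types "quit":
echoUntilQuit :: IO ()
echoUntilQuit = while (fmap (/= "quit") getLine)
                      (putStrLn "still going")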
Summary
A Monad is a set of actions in some domain. The monad/domain define the kind of "effects" which are possible. The >> and >>= operators represent sequencing of actions and monadic expression may be used to represent any kind of "imperative (sub)program" in your (functional) Haskell program.
The great things are that:
you can design your own Monad which supports the features and effects that you want
see Prompt for an example of a "dialogue only subprogram",
see Rand for an example of "sampling only subprogram";
you can write your own control structures (while, throw, catch or more exotic ones) as functions taking actions and composing them in some way to produce bigger macro-actions.
MonadRandom
A good way of understanding monads is the MonadRandom package. The Rand monad is made of actions whose output can be random (the effect is randomness). An action in this monad is some kind of random variable (or more exactly a sampling process):
-- Sample an Int from some distribution
action :: Rand Int
Using Rand to do some sampling/random algorithms is quite interesting because you have random variables as first class values:
-- Estimate the mean by sampling the random variable x n times
sampleMean :: Real a => Int -> Rand a -> Rand a
sampleMean n x = ...
In this setting, the sequence function from Prelude,
sequence :: Monad m => [m a] -> m [a]
becomes
sequence :: [Rand a] -> Rand [a]
It creates a random variable obtained by sampling independently from a list of random variables.
There are three main observations concerning the IO monad:
1) You can't get values out of it. Other types like Maybe might allow you to extract values, but neither the monad class interface itself nor the IO data type allow it.
2) "Inside" IO is not only the real value but also that "RealWorld" thing. This dummy value is used to enforce the chaining of actions by the type system: If you have two independent calculations, the use of >>= makes the second calculation dependent on the first.
3) Assume a non-deterministic thing like random :: () -> Int, which isn't allowed in Haskell. If you change the signature to random :: Blubb -> (Blubb, Int), it is allowed, provided you make sure that nobody can ever use a Blubb twice: because in that case all inputs are "different", it is no problem that the outputs are different as well.
Now we can use fact 1): nobody can get anything out of IO, so we can use the RealWorld dummy hidden in IO to serve as a Blubb. There is only one IO in the whole application (the one we get from main), and it takes care of proper sequencing, as we have seen in 2). Problem solved.
One thing that often helps me to understand the nature of something is to examine it in the most trivial way possible. That way, I'm not getting distracted by potentially unrelated concepts. With that in mind, I think it may be helpful to understand the nature of the Identity Monad, as it's the most trivial implementation of a Monad possible (I think).
What is interesting about the Identity Monad? I think it is that it allows me to express the idea of evaluating expressions in a context defined by other expressions. And to me, that is the essence of every Monad I've encountered (so far).
If you already had a lot of exposure to 'mainstream' programming languages before learning Haskell (like I did), then this doesn't seem very interesting at all. After all, in a mainstream programming language, statements are executed in sequence, one after the other (excepting control-flow constructs, of course). And naturally, we can assume that every statement is evaluated in the context of all previously executed statements and that those previously executed statements may alter the environment and the behavior of the currently executing statement.
All of that is pretty much a foreign concept in a functional, lazy language like Haskell. The order in which computations are evaluated in Haskell is well-defined, but sometimes hard to predict, and even harder to control. And for many kinds of problems, that's just fine. But other sorts of problems (e.g. IO) are hard to solve without some convenient way to establish an implicit order and context between the computations in your program.
As far as side-effects go, specifically, often they can be transformed (via a Monad) into simple state-passing, which is perfectly legal in a pure functional language. Some Monads don't seem to be of that nature, however. Monads such as the IO Monad or the ST monad literally perform side-effecting actions. There are many ways to think about this, but one way that I think about it is: just because my computations must exist in a world without side-effects, that doesn't mean the Monad must as well. As such, the Monad is free to establish a context for my computation to execute in that is based on side-effects defined by other computations.
Finally, I must disclaim that I am definitely not a Haskell expert. As such, please understand that everything I've said is pretty much my own thoughts on this subject and I may very well disown them later when I understand Monads more fully.
the point is so there can be clean error handling in a chain of functions, containers, and side effects
More or less.
how exactly is the problem of side-effects solved?
A value in the I/O monad, i.e. one of type IO a, should be interpreted as a program. p >> q on IO values can then be interpreted as the operator that combines two programs into one that first executes p, then q. The other monad operators have similar interpretations. By assigning a program to the name main, you declare to the compiler that that is the program that has to be executed by its output object code.
As for the list monad, it's not really related to the I/O monad except in a very abstract mathematical sense. The IO monad gives deterministic computation with side effects, while the list monad gives non-deterministic (but not random!) backtracking search, somewhat similar to Prolog's modus operandi.
With this concept of containers, the language essentially says anything inside the containers is non-deterministic
No. Haskell is deterministic. If you ask for integer addition 2+2 you will always get 4.
"Nondeterministic" is only a metaphor, a way of thinking. Everything is deterministic under the hood. If you have this code:
do x <- [4,5]
   y <- [0,1]
   return (x+y)
it is roughly equivalent to Python code
l = []
for x in [4,5]:
    for y in [0,1]:
        l.append(x+y)
You see nondeterminism here? No, it's deterministic construction of a list. Run it twice, you'll get the same numbers in the same order.
You can describe it this way: Choose arbitrary x from [4,5]. Choose arbitrary y from [0,1]. Return x+y. Collect all possible results.
That way seems to involve nondeterminism, but it's only a nested loop (list comprehension). There is no "real" nondeterminism here, it's simulated by checking all possibilities. Nondeterminism is an illusion. The code only appears to be nondeterministic.
This code using State monad:
do put 0
x <- get
put (x+2)
y <- get
return (y+3)
gives 5 and seems to involve changing state. As with lists, it's an illusion. There are no "variables" that change (as in imperative languages). Everything is immutable under the hood.
You can describe the code this way: put 0 to a variable. Read the value of a variable to x. Put (x+2) to the variable. Read the variable to y, and return y+3.
That way seems to involve state, but it's only composing functions passing additional parameter. There is no "real" mutability here, it's simulated by composition. Mutability is an illusion. The code only appears to be using it.
Haskell does it this way: you've got functions
a -> s -> (b,s)
Such a function takes an old value of the state and returns a new one. It does not involve mutability or changing variables. It's a function in the mathematical sense.
For example, the function put takes a new value of the state, ignores the current state, and returns the new state:
put x _ = ((), x)
Just like you can compose two normal functions
a -> b
b -> c
into
a -> c
using the (.) operator, you can compose "state" transformers
a -> s -> (b,s)
b -> s -> (c,s)
into a single function
a -> s -> (c,s)
Try writing the composition operator yourself. This is what really happens: there are no "side effects", only passing arguments to functions.
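If you want to check your attempt, here is one possible solution (a sketch of mine; the name composeState is not from the original answer):
-- Compose two state transformers, threading the state through both:
composeState :: (a -> s -> (b, s)) -> (b -> s -> (c, s)) -> (a -> s -> (c, s))
composeState f g = \a s -> let (b, s') = f a s -- run f on the initial state
                           in g b s'           -- feed its result and new state to g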
From what I understand, Monad is just another typeclass that declares ways to interact with data [...]
...providing an interface common to all those types which have an instance. This can then be used to provide generic definitions which work across all monadic types.
It seems clever and clean to implement these 3 things with one concept [...]
...the only three things that are implemented are the instances for those three types (list, Maybe and IO) - the types themselves are defined independently elsewhere.
[...] but really, the point is so there can be clean error handling in a chain of functions, containers, and side effects.
Not just error handling e.g. consider ST - without the monadic interface, you would have to pass the encapsulated-state directly and correctly...a tiresome task.
How exactly is the problem of side-effects solved?
Short answer: Haskell manages them by using types to indicate their presence.
Can someone explain how, intuitively, Haskell gets away with changing state with inputs and output?
"Intuitively"...like what's available over here? Let's try a simple direct comparison instead:
From How to Declare an Imperative by Philip Wadler:
(* page 26 *)
type 'a io = unit -> 'a

infix >>=
val >>= : 'a io * ('a -> 'b io) -> 'b io
fun m >>= k = fn () => let
                         val x = m ()
                         val y = k x ()
                       in
                         y
                       end

val return : 'a -> 'a io
fun return x = fn () => x

val putc : char -> unit io
fun putc c = fn () => putcML c

val getc : char io
val getc = fn () => getcML ()

fun getcML () = valOf (TextIO.input1 (TextIO.stdIn))

(* page 25 *)
fun putcML c = TextIO.output1 (TextIO.stdOut, c)
Based on these two answers of mine, this is my Haskell translation:
type IO a = OI -> a
(>>=) :: IO a -> (a -> IO b) -> IO b
m >>= k = \ u -> let !(u1, u2) = partOI u in
                 let !x = m u1 in
                 let !y = k x u2 in
                 y

return :: a -> IO a
return x = \ u -> let !_ = partOI u in x
putc :: Char -> IO ()
putc c = \ u -> putcOI c u
getc :: IO Char
getc = \ u -> getcOI u
-- primitives
data OI
partOI :: OI -> (OI, OI)
putcOI :: Char -> OI -> ()
getcOI :: OI -> Char
Now remember that short answer about side-effects?
Haskell manages them by using types to indicate their presence.
Data.Char.chr :: Int -> Char -- no side effects
getChar :: IO Char -- side effects at
{- :: OI -> Char -} -- work: beware!
