Consider the following example.
newtype TooBig = TooBig Int deriving Show
choose :: MonadPlus m => [a] -> m a
choose = msum . map return
ex1 :: (MonadPlus m, MonadError TooBig m) => m Int
ex1 = do
  x <- choose [5,7,1]
  if x > 5
    then throwError (TooBig x)
    else return x
ex2 :: (MonadPlus m, MonadError TooBig m) => m Int
ex2 = ex1 `catchError` handler
  where
    handler (TooBig x) = if x > 7
      then throwError (TooBig x)
      else return x
ex3 :: Either TooBig [Int]
ex3 = runIdentity . runExceptT . runListT $ ex2
What should the value of ex3 be? If we use MTL, the answer is Right [7], which makes sense: ex1 is cut short when it throws the error, and the handler simply returns the pure value return 7, which becomes Right [7].
However, in the paper “Extensible Effects: An Alternative to Monad Transformers” by Oleg Kiselyov et al., the authors say that this is “a surprising and undesirable result.” They expected the result to be Right [5,7,1], because the handler recovers from the exception by not re-throwing it. Essentially, they expected the catchError to be moved into ex1, as follows.
newtype TooBig = TooBig Int deriving Show
choose :: MonadPlus m => [a] -> m a
choose = msum . map return
ex1 :: (MonadPlus m, MonadError TooBig m) => m Int
ex1 = do
  x <- choose [5,7,1]
  if x > 5
    then throwError (TooBig x) `catchError` handler
    else return x
  where
    handler (TooBig x) = if x > 7
      then throwError (TooBig x)
      else return x
ex3 :: Either TooBig [Int]
ex3 = runIdentity . runExceptT . runListT $ ex1
Indeed, this is what extensible effects do. They change the semantics of the program by moving the effect handlers closer to the effect source. For example, local is moved closer to ask and catchError is moved closer to throwError. The authors of the paper tout this as one of the advantages of extensible effects over monad transformers, claiming that monad transformers have “inflexible semantics”.
But, what if I want the result to be Right [7] instead of Right [5,7,1] for whatever reason? As shown in the examples above, monad transformers can be used to get both results. However, because extensible effects always seem to move effect handlers closer to the effect source, it seems impossible to get the result Right [7].
So, the question is how to get the “inflexible semantics of monad transformers” using extensible effects? Is it possible to prevent individual effect handlers from moving closer to the effect source when using extensible effects? If not, then is this a limitation of extensible effects that needs to be addressed?
I'm also a little confused about the nuance in those excerpts from that particular paper. I think it's more useful to take a few steps back and to explain the motivations behind the enterprise of algebraic effects, to which that paper belongs.
The MTL approach is in some sense the most obvious and general: you have an interface (or "effect"), put it in a type class and call it a day. The cost of that generality is that it is unprincipled: you don't know what happens when you combine interfaces together. This issue appears most concretely when you implement an interface: you must implement all of them simultaneously. We like to think that each interface can be implemented in isolation in a dedicated transformer, but if you have two interfaces, say MonadPlus and MonadError, implemented by transformers ListT and ExceptT, in order to compose them, you will also have to either implement MonadError for ListT or MonadPlus for ExceptT. This O(n^2) instance problem is popularly understood as "just boilerplate", but the deeper issue is that if we allow interfaces to be of any shape, there is no telling what danger could hide in that "boilerplate", if it can even be implemented at all.
We must put more structure on those interfaces. For some definition of "lift" (lift from MonadTrans), the effects we can lift uniformly through transformers are exactly the algebraic effects. (See also, Monad Transformers and Modular Algebraic Effects, What Binds Them Together.)
This is not truly a restriction. While some interfaces are not algebraic in a technical sense, such as MonadError (because of catch), they can usually still be expressed within the framework of algebraic effects, just less literally. While restricting the definition of an "interface", we also gain richer ways of using them.
So I think algebraic effects are, above all, a different, more precise way of thinking about interfaces. As a way of thinking, it can thus be adopted without changing anything about your code, which is why comparisons tend to look at the same code twice, and it is difficult to see the point without having a grasp on the surrounding context and motivations. If you think the O(n^2) instances problem is a trivial "boilerplate" problem, you already believe in the principle that interfaces ought to be composable; algebraic effects are one way of explicitly designing libraries and languages around that principle.
"Algebraic effects" are a fuzzy notion without a fixed definition. Nowadays they are most recognizable by syntax featuring a call and a handle construct (or op/perform/throw/raise and catch/match). call is the one construct to use interfaces and handle is how we implement them. The idea common to such languages is that there are equations (hence "algebraic") that provide a basic intuition of how call and handle behave in a way that's independent of the interface, notably via the interaction of handle with sequential composition (>>=).
Semantically, the meaning of a program can be denoted by a tree of calls, and a handle is a transformation of such trees. That's why many incarnations of "algebraic effects" in Haskell start with free monads, types of trees parameterized by the type of nodes f:
data Free f a
  = Pure a
  | Free (f (Free f a))
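For reference, here is the standard Monad instance for Free (a sketch; packages such as free define it this way, together with the Functor and Applicative instances that modern GHC requires):
instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)
instance Functor f => Applicative (Free f) where
  pure = Pure
  mf <*> mx = mf >>= \g -> fmap g mx
instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)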
From that point of view, the program ex2 is a tree with three branches, with the branch labeled 7 ending in an exception:
ex2 :: Free ([] :+: Const Int) Int -- The functor "Const e" models exceptions (the equivalent of "MonadError e")
ex2 = Free [Pure 5, Free (Const 7), Pure 1]
-- You can write this with do notation to look like the original ex2, I'd say "it's just notation".
-- NB: constructors for (:+:) omitted
And each of the effects [] and Const Int corresponds to some way of transforming the tree, eliminating that effect from the tree (possibly introducing others, including itself).
"Catching" an exception corresponds to handling the Const effect by converting Free (Const x) nodes into some new tree h x.
To handle the [] effect, one way is to compose all children of a Free [...] node using (>>=), collecting their results in a final list. This can be seen as a generalization of depth-first search.
You get the result [7] or [5,7,1] depending on how those transformations are ordered.
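To make that concrete, here is a minimal, specialized sketch (my own illustration, not code from the paper; Tree, Throw and the in-place recovery are hypothetical stand-ins for the Free encoding above):
data Tree = Leaf Int | Throw Int | Branch [Tree]

ex2T :: Tree
ex2T = Branch [Leaf 5, Throw 7, Leaf 1]

-- Handle the exception inside each branch first (the handler recovers
-- with the thrown value, as in ex2), then collect the list:
catchFirst :: Tree -> [Int]
catchFirst (Branch ts) = concatMap catchFirst ts
catchFirst (Leaf n)    = [n]
catchFirst (Throw n)   = [n]    -- recovered in place

-- Collect the list first; an uncaught throw aborts the whole traversal,
-- and only afterwards does the outer handler recover:
collectFirst :: Tree -> Either Int [Int]
collectFirst (Branch ts) = concat <$> traverse collectFirst ts
collectFirst (Leaf n)    = Right [n]
collectFirst (Throw n)   = Left n

-- catchFirst ex2T                           == [5,7,1]
-- either (\n -> [n]) id (collectFirst ex2T) == [7]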
Of course, there is a correspondence to the two orders of monad transformers in an MTL approach, but that intuition of programs as trees, which is generally applicable to all algebraic effects, is not as obvious when you're in the middle of implementing an instance such as MonadError e for ListT. That intuition might make sense a posteriori, but it is a priori obfuscated because type class instances are not first-class values like handlers, and monad transformers are typically expressed in terms of the final interpretation (hidden in the monad m they transform) instead of the initial syntax.
From the document:
when :: (Monad m) => Bool -> m () -> m ()
when p s = if p then s else return ()
The when function takes a boolean argument and a monadic computation with unit () type and performs the computation only when the boolean argument is True.
===
As a Haskell newbie, my problem is that for me m () is some "void" data, but here the document mentions it as computation. Is it because of the laziness of Haskell?
Laziness has no part in it.
The m/Monad part is often called computation.
The best example might be m = IO:
Look at putStrLn "Hello" :: IO () - This is a computation that, when run, will print "Hello" to your screen.
This computation produces no interesting result - so its result type is ()
Now when you write
hello :: Bool -> IO ()
hello sayIt =
  when sayIt (putStrLn "Hello")
then hello True is a computation that, when run, will print "Hello"; while hello False is a computation that when run will do nothing at all.
Now compare it to getLine :: IO String - this is a computation that, when run, will prompt you for an input and will return the input as a String - that's why the return type is String.
Does this help?
for me "m ()" is some "void" data
And that kinda makes sense, in that a computation is a special kind of data. It has nothing to do with laziness - it's associated with context.
Let's take State as an example. A function of type, say, s -> () in Haskell can only produce one value. However, a function of type s -> ((), s) is a regular function doing some transformation on s. The problem you're having is that you're only looking at the () part, while the s -> s part stays hidden. That's the point of State - to hide the state passing.
Hence State s () is trivially convertible to s -> ((), s) and back, and it still is a Monad (a computation) that produces a value of... ().
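To spell that conversion out (a small sketch, using runState and state from the mtl/transformers libraries):
import Control.Monad.State (State, runState, state)

toFun :: State s () -> (s -> ((), s))
toFun = runState

fromFun :: (s -> ((), s)) -> State s ()
fromFun = state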
If we look at practical use, now:
(flip runState 10) $ do
  modify (+1)
This expression produces a tuple of type ((), Int), here ((), 11): it modifies the state, adding 1 to it, while inside the do block the Int state stays hidden. The intermediate value it produces is (), though, which fits your when:
when (5 > 3) $ modify (+1)
Monads are notably abstract and mathematical, so intuitive statements about them are often made in language that is rather vague. So values of a monadic type are often informally labeled as "computations," "actions" or (less often) "commands," because it's an analogy that sometimes helps us reason about them. But when you dig deeper, these turn out to be empty words when used this way; ultimately what they mean is "some value of a type that provides the Monad interface."
I like the word "action" better for this, so let me go with that. The intuition for the use of that word in Haskell is this: the language makes a distinction between functions and actions:
Functions can't have any side effects, and their types look like a -> b.
Actions may have side effects, and their types look like IO a.
A consequence of this: an action of type IO () produces an uninteresting result value, and therefore it's either a no-op (return ()) or an action that is only interesting because of its side effects.
Monad then is the interface that allows you to glue actions together into complex actions.
This is all very intuitive, but it's an analogy that becomes rather stretched when you try to apply it to many monads other than the IO type. For example, lists are a monad:
instance Monad [] where
  return a = [a]
  as >>= f = concatMap f as
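A quick check of that instance in ghci (each element produces two results, and (>>=) concatenates them):
ghci> [1,2,3] >>= \x -> [x, x*10]
[1,10,2,20,3,30]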
The "actions" or "computations" of the list monad are... lists. How is a list an "action" or a "computation"? The analogy is rather weak in this case, isn't it?
So I'd say that this is the best advice:
Understand that "action" and "computation" are analogies. There's no strict definition.
Understand that these analogies are stronger for some monad instances, and weak for others.
The ultimate barometer of how things work are the Monad laws and the definitions of the various functions that work with Monad.
In my humble opinion the answers to the famous question "What is a monad?", especially the most voted ones, try to explain what a monad is without clearly explaining why monads are really necessary. Can they be explained as the solution to a problem?
Why do we need monads?
We want to program only using functions. ("functional programming (FP)" after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). This way, "the program" is a function as well: main = f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0), divide by 0). We have no "exceptions" in FP (an exception is not a function). How do we solve it?
Solution: Let's allow functions to return two kind of things: instead of having g : Real,Real -> Real (function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (function from two reals into (real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses either a real or simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type). Also note that >>= must be written only once per "boxing type" (different box, different adapting algorithm).
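That adapting algorithm is exactly the Maybe instance that ships with the Prelude; as a sketch:
instance Monad Maybe where
  return x = Just x          -- box a plain value
  Nothing >>= _ = Nothing    -- g failed: skip f, propagate Nothing
  Just x  >>= f = f x        -- g succeeded: unbox x and feed it to f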
Many other problems arise which can be solved using this same pattern: 1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values". 2. Have a composer/linker g >>= f to help connecting g's output to f's input, so we don't have to change any f at all.
Remarkable problems that can be solved using this technique are:
having a global state that every function in the sequence of functions ("the program") can share: solution StateMonad.
We don't like "impure functions": functions that yield different output for the same input. Therefore, let's mark those functions, making them return a tagged/boxed value: the IO monad.
Total happiness!
The answer is, of course, "We don't". As with all abstractions, it isn't necessary.
Haskell does not need a monad abstraction. It isn't necessary for performing IO in a pure language. The IO type takes care of that just fine by itself. The existing monadic desugaring of do blocks could be replaced with desugaring to bindIO, returnIO, and failIO as defined in the GHC.Base module. (It's not a documented module on hackage, so I'll have to point at its source for documentation.) So no, there's no need for the monad abstraction.
So if it's not needed, why does it exist? Because it was found that many patterns of computation form monadic structures. Abstraction of a structure allows for writing code that works across all instances of that structure. To put it more concisely - code reuse.
In functional languages, the most powerful tool found for code reuse has been composition of functions. The good old (.) :: (b -> c) -> (a -> b) -> (a -> c) operator is exceedingly powerful. It makes it easy to write tiny functions and glue them together with minimal syntactic or semantic overhead.
But there are cases when the types don't work out quite right. What do you do when you have foo :: (b -> Maybe c) and bar :: (a -> Maybe b)? foo . bar doesn't typecheck, because b and Maybe b aren't the same type.
But... it's almost right. You just want a bit of leeway. You want to be able to treat Maybe b as if it were basically b. It's a poor idea to just flat-out treat them as the same type, though. That's more or less the same thing as null pointers, which Tony Hoare famously called the billion-dollar mistake. So if you can't treat them as the same type, maybe you can find a way to extend the composition mechanism (.) provides.
In that case, it's important to really examine the theory underlying (.). Fortunately, someone has already done this for us. It turns out that the combination of (.) and id forms a mathematical construct known as a category. But there are other ways to form categories. A Kleisli category, for instance, allows the objects being composed to be augmented a bit. A Kleisli category for Maybe would consist of (.) :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c) and id :: a -> Maybe a. That is, the objects in the category augment the (->) with a Maybe, so (a -> b) becomes (a -> Maybe b).
And suddenly, we've extended the power of composition to things that the traditional (.) operation doesn't work on. This is a source of new abstraction power. Kleisli categories work with more types than just Maybe. They work with every type that can assemble a proper category, obeying the category laws.
Left identity: id . f = f
Right identity: f . id = f
Associativity: f . (g . h) = (f . g) . h
As long as you can prove that your type obeys those three laws, you can turn it into a Kleisli category. And what's the big deal about that? Well, it turns out that monads are exactly the same thing as Kleisli categories. Monad's return is the same as Kleisli id. Monad's (>>=) isn't identical to Kleisli (.), but it turns out to be very easy to write each in terms of the other. And the category laws are the same as the monad laws, when you translate them across the difference between (>>=) and (.).
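To see how easy, here is a sketch of each in terms of the other (bind is named so as not to clash with the Prelude's (>>=)):
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
(f <=< g) x = g x >>= f          -- Kleisli (.) from (>>=)

bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = (f <=< const m) ()    -- (>>=) from Kleisli (.)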
So why go through all this bother? Why have a Monad abstraction in the language? As I alluded to above, it enables code reuse. It even enables code reuse along two different dimensions.
The first dimension of code reuse comes directly from the presence of the abstraction. You can write code that works across all instances of the abstraction. There's the entire monad-loops package consisting of loops that work with any instance of Monad.
The second dimension is indirect, but it follows from the existence of composition. When composition is easy, it's natural to write code in small, reusable chunks. This is the same way having the (.) operator for functions encourages writing small, reusable functions.
So why does the abstraction exist? Because it's proven to be a tool that enables more composition in code, resulting in creating reusable code and encouraging the creation of more reusable code. Code reuse is one of the holy grails of programming. The monad abstraction exists because it moves us a little bit towards that holy grail.
Benjamin Pierce said in TAPL
A type system can be regarded as calculating a kind of static approximation to the run-time behaviours of the terms in a program.
That's why a language equipped with a powerful type system is strictly more expressive than a poorly typed language. You can think about monads in the same way.
As @Carl and sigfpe point out, you can equip a datatype with all the operations you want without resorting to monads, type classes or whatever other abstract stuff. However, monads allow you not only to write reusable code, but also to abstract away all redundant details.
As an example, let's say we want to filter a list. The simplest way is to use the filter function: filter (> 3) [1..10], which equals [4,5,6,7,8,9,10].
A slightly more complicated version of filter, that also passes an accumulator from left to right, is
swap (x, y) = (y, x)
(.*) = (.) . (.)
filterAccum :: (a -> b -> (Bool, a)) -> a -> [b] -> [b]
filterAccum f a xs = [x | (x, True) <- zip xs $ snd $ mapAccumL (swap .* f) a xs]
To get all i, such that i <= 10, sum [1..i] > 4, sum [1..i] < 25, we can write
filterAccum (\a x -> let a' = a + x in (a' > 4 && a' < 25, a')) 0 [1..10]
which equals [3,4,5,6].
Or we can redefine the nub function, that removes duplicate elements from a list, in terms of filterAccum:
nub' = filterAccum (\a x -> (x `notElem` a, x:a)) []
nub' [1,2,4,5,4,3,1,8,9,4] equals [1,2,4,5,3,8,9]. A list is passed as an accumulator here. The code works because it's possible to leave the list monad, so the whole computation stays pure (notElem doesn't actually use >>=, but it could). However, it's not possible to safely leave the IO monad (i.e. you cannot execute an IO action and return a pure value - the value will always be wrapped in the IO monad). Another example is mutable arrays: after you have left the ST monad, where a mutable array lives, you cannot update the array in constant time anymore. So we need the monadic filtering from the Control.Monad module:
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
filterM _ [] = return []
filterM p (x:xs) = do
  flg <- p x
  ys <- filterM p xs
  return (if flg then x:ys else ys)
filterM executes a monadic action for all elements of a list, yielding the elements for which the monadic action returns True.
A filtering example with an array:
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

nub' xs = runST $ do
  arr <- newArray (1, 9) True :: ST s (STUArray s Int Bool)
  let p i = readArray arr i <* writeArray arr i False
  filterM p xs
main = print $ nub' [1,2,4,5,4,3,1,8,9,4]
prints [1,2,4,5,3,8,9] as expected.
And a version with the IO monad, which asks what elements to return:
main = filterM p [1,2,4,5] >>= print where
  p i = putStrLn ("return " ++ show i ++ "?") *> readLn
E.g.
return 1? -- output
True -- input
return 2?
False
return 4?
False
return 5?
True
[1,5] -- output
And as a final illustration, filterAccum can be defined in terms of filterM:
filterAccum f a xs = evalState (filterM (state . flip f) xs) a
with the StateT monad, which is used under the hood, being just an ordinary datatype.
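For reference, that ordinary datatype is (as defined in the transformers package):
import Data.Functor.Identity (Identity)

newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }
type State s = StateT s Identity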
This example illustrates that monads not only allow you to abstract computational context and write clean reusable code (due to the composability of monads, as @Carl explains), but also to treat user-defined datatypes and built-in primitives uniformly.
I don't think IO should be seen as a particularly outstanding monad, but it's certainly one of the more astounding ones for beginners, so I'll use it for my explanation.
Naïvely building an IO system for Haskell
The simplest conceivable IO system for a purely-functional language (and in fact the one Haskell started out with) is this:
main₀ :: String -> String
main₀ _ = "Hello World"
With laziness, that simple signature is enough to actually build interactive terminal programs - very limited, though. Most frustrating is that we can only output text. What if we added some more exciting output possibilities?
data Output = TxtOutput String
            | Beep Frequency

main₁ :: String -> [Output]
main₁ _ = [ TxtOutput "Hello World"
          -- , Beep 440  -- for debugging
          ]
Cute, but of course a much more realistic “alternative output” would be writing to a file. But then you'd also want some way to read from files. Any chance?
Well, when we take our main₁ program and simply pipe a file to the process (using operating system facilities), we have essentially implemented file-reading. If we could trigger that file-reading from within the Haskell language...
readFile :: FilePath -> (String -> [Output]) -> [Output]
This would use an “interactive program” String->[Output], feed it a string obtained from a file, and yield a non-interactive program that simply executes the given one.
There's one problem here: we don't really have a notion of when the file is read. The [Output] list sure gives a nice order to the outputs, but we don't get an order for when the inputs will be done.
Solution: make input-events also items in the list of things to do.
data IO₀ = TxtOut String
         | TxtIn (String -> [Output])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [Output])
         | Beep Double

main₂ :: String -> [IO₀]
main₂ _ = [ FileRead "/dev/null" $ \_ ->
              [TxtOutput "Hello World"]
          ]
Ok, now you may spot an imbalance: you can read a file and make output dependent on it, but you can't use the file contents to decide to e.g. also read another file. Obvious solution: make the result of the input-events also something of type IO, not just Output. That sure includes simple text output, but also allows reading additional files etc..
data IO₁ = TxtOut String
         | TxtIn (String -> [IO₁])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [IO₁])
         | Beep Double

main₃ :: String -> [IO₁]
main₃ _ = [ TxtIn $ \_ ->
              [TxtOut "Hello World"]
          ]
That would now actually allow you to express any file operation you might want in a program (though perhaps not with good performance), but it's somewhat overcomplicated:
main₃ yields a whole list of actions. Why don't we simply use the signature :: IO₁, which has this as a special case?
The lists don't really give a reliable overview of program flow anymore: most subsequent computations will only be “announced” as the result of some input operation. So we might as well ditch the list structure, and simply cons a “and then do” to each output operation.
data IO₂ = TxtOut String IO₂
         | TxtIn (String -> IO₂)
         | Terminate

main₄ :: IO₂
main₄ = TxtIn $ \_ ->
          TxtOut "Hello World"
                 Terminate
Not too bad!
So what has all of this to do with monads?
In practice, you wouldn't want to use plain constructors to define all your programs. There would need to be a good couple of such fundamental constructors, yet for most higher-level stuff we would like to write a function with some nice high-level signature. It turns out most of these would look quite similar: accept some kind of meaningfully-typed value, and yield an IO action as the result.
getTime :: (UTCTime -> IO₂) -> IO₂
randomRIO :: Random r => (r,r) -> (r -> IO₂) -> IO₂
findFile :: RegEx -> (Maybe FilePath -> IO₂) -> IO₂
There's evidently a pattern here, and we'd better write it as
type IO₃ a = (a -> IO₂) -> IO₂   -- If this reminds you of continuation-passing
                                 -- style, you're right.
getTime :: IO₃ UTCTime
randomRIO :: Random r => (r,r) -> IO₃ r
findFile :: RegEx -> IO₃ (Maybe FilePath)
Now that starts to look familiar, but we're still only dealing with thinly-disguised plain functions under the hood, and that's risky: each “value-action” has the responsibility of actually passing on the resulting action of any contained function (else the control flow of the entire program is easily disrupted by one ill-behaved action in the middle). We'd better make that requirement explicit. Well, it turns out those are the monad laws, though I'm not sure we can really formulate them without the standard bind/join operators.
At any rate, we've now reached a formulation of IO that has a proper monad instance:
data IO₄ a = TxtOut String (IO₄ a)
           | TxtIn (String -> IO₄ a)
           | TerminateWith a

txtOut :: String -> IO₄ ()
txtOut s = TxtOut s $ TerminateWith ()

txtIn :: IO₄ String
txtIn = TxtIn TerminateWith

instance Functor IO₄ where
  fmap f (TerminateWith a) = TerminateWith $ f a
  fmap f (TxtIn g)         = TxtIn $ fmap f . g
  fmap f (TxtOut s c)      = TxtOut s $ fmap f c

instance Applicative IO₄ where
  pure  = TerminateWith
  (<*>) = ap   -- ap comes from Control.Monad

instance Monad IO₄ where
  TerminateWith x >>= f = f x
  TxtOut s c      >>= f = TxtOut s $ c >>= f
  TxtIn g         >>= f = TxtIn $ (>>= f) . g
Obviously this is not an efficient implementation of IO, but it's in principle usable.
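To see that it is usable, here is a hypothetical interpreter mapping IO₄ onto the real IO (my sketch, not part of the original construction):
runIO₄ :: IO₄ a -> IO a
runIO₄ (TerminateWith a) = return a
runIO₄ (TxtOut s c)      = putStr s >> runIO₄ c
runIO₄ (TxtIn g)         = getLine >>= runIO₄ . g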
Monads serve basically to compose functions together in a chain. Period.
Now the way they compose differs across the existing monads, thus resulting in different behaviors (e.g., to simulate mutable state in the state monad).
The confusion about monads is that being so general, i.e., a mechanism to compose functions, they can be used for many things, thus leading people to believe that monads are about state, about IO, etc, when they are only about "composing functions".
Now, one interesting thing about monads is that the result of the composition is always of type "M a", that is, a value inside an envelope tagged with "M". This feature happens to be really nice for implementing, for example, a clear separation between pure and impure code: declare all impure actions as functions of type "IO a" and provide no function, when defining the IO monad, to take the "a" value out of the "IO a". The result is that no function can be pure and at the same time take a value out of an "IO a", because there is no way to extract such a value while staying pure (the function must be inside the "IO" monad to use such a value). (NOTE: well, nothing is perfect, so the "IO straitjacket" can be broken using "unsafePerformIO :: IO a -> a", thus polluting what was supposed to be a pure function, but this should be used very sparingly and only when you really know you are not introducing any impure code with side-effects.)
Monads are just a convenient framework for solving a class of recurring problems. First, monads must be functors (i.e. must support mapping without looking at the elements (or their type)); they must also bring a binding (or chaining) operation and a way to create a monadic value from an element type (return). Finally, bind and return must satisfy three equations (left identity, right identity and associativity), also called the monad laws. (Alternatively one could define monads to have a flattening operation instead of binding.)
The list monad is commonly used to deal with non-determinism. The bind operation selects one element of the list (intuitively all of them in parallel worlds), lets the programmer do some computation with them, and then combines the results from all worlds into a single list (by concatenating, or flattening, a nested list). Here is how one would define a permutation function in the monadic framework of Haskell:
perm [e] = [[e]]
perm l = do (leader, index) <- zip l [0 :: Int ..]
            let shortened = take index l ++ drop (index + 1) l
            trailer <- perm shortened
            return (leader : trailer)
Here is an example repl session:
*Main> perm "a"
["a"]
*Main> perm "ab"
["ab","ba"]
*Main> perm ""
[]
*Main> perm "abc"
["abc","acb","bac","bca","cab","cba"]
It should be noted that the list monad is in no way a side effecting computation. A mathematical structure being a monad (i.e. conforming to the above mentioned interfaces and laws) does not imply side effects, though side-effecting phenomena often nicely fit into the monadic framework.
You need monads if you have a type constructor and functions that return values of that type family. Eventually, you would like to combine these kinds of functions together. These are the three key elements to answering why.
Let me elaborate. You have Int, String and Real and functions of type Int -> String, String -> Real and so on. You can combine these functions easily, ending with Int -> Real. Life is good.
Then, one day, you need to create a new family of types. It could be because you need to consider the possibility of returning no value (Maybe), returning an error (Either), multiple results (List) and so on.
Notice that Maybe is a type constructor. It takes a type, like Int and returns a new type Maybe Int. First thing to remember, no type constructor, no monad.
Of course, you want to use your type constructor in your code, and soon you end with functions like Int -> Maybe String and String -> Maybe Float. Now, you can't easily combine your functions. Life is not good anymore.
And here's when monads come to the rescue. They allow you to combine that kind of functions again. You just need to replace ordinary composition (.) with Kleisli composition (>=>).
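For instance (a sketch with made-up functions, using (>=>) from Control.Monad):
import Control.Monad ((>=>))

readDigit :: Char -> Maybe Int              -- hypothetical: may fail
readDigit c | c >= '0' && c <= '9' = Just (fromEnum c - fromEnum '0')
            | otherwise            = Nothing

half :: Int -> Maybe Int                    -- hypothetical: may fail
half n = if even n then Just (n `div` 2) else Nothing

readHalf :: Char -> Maybe Int
readHalf = readDigit >=> half               -- composes just like (.)
-- readHalf '8' == Just 4; readHalf '3' == Nothing; readHalf 'x' == Nothing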
Why do we need monadic types?
Since it was the quandary of I/O and its observable effects in nonstrict languages like Haskell that brought the monadic interface to such prominence:
[...] monads are used to address the more general problem of computations (involving state, input/output, backtracking, ...) returning values: they do not solve any input/output-problems directly but rather provide an elegant and flexible abstraction of many solutions to related problems. [...] For instance, no less than three different input/output-schemes are used to solve these basic problems in Imperative functional programming, the paper which originally proposed `a new model, based on monads, for performing input/output in a non-strict, purely functional language'. [...]
[Such] input/output-schemes merely provide frameworks in which side-effecting operations can safely be used with a guaranteed order of execution and without affecting the properties of the purely functional parts of the language.
Claus Reinke (pages 96-97 of 210).
(emphasis by me.)
[...] When we write effectful code – monads or no monads – we have to constantly keep in mind the context of expressions we pass around.
The fact that monadic code ‘desugars’ (is implementable in terms of) side-effect-free code is irrelevant. When we use monadic notation, we program within that notation – without considering what this notation desugars into. Thinking of the desugared code breaks the monadic abstraction. A side-effect-free, applicative code is normally compiled to (that is, desugars into) C or machine code. If the desugaring argument has any force, it may be applied just as well to the applicative code, leading to the conclusion that it all boils down to the machine code and hence all programming is imperative.
[...] From the personal experience, I have noticed that the mistakes I make when writing monadic code are exactly the mistakes I made when programming in C. Actually, monadic mistakes tend to be worse, because monadic notation (compared to that of a typical imperative language) is ungainly and obscuring.
Oleg Kiselyov (page 21 of 26).
The most difficult construct for students to understand is the monad. I introduce IO without mentioning monads.
Olaf Chitil.
More generally:
Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.
Considering that the concept of monads is not likely to disappear from the functional programming landscape any time soon, it is vital that we, as the functional programming community, somehow overcome the problems novices encounter when first studying monads.
Tim Steenvoorden, Jurriën Stutterheim, Erik Barendsen and Rinus Plasmeijer.
If only there was another way to specify "a guaranteed order of execution" in Haskell, while keeping the ability to separate regular Haskell definitions from those involved in I/O (and its observable effects) - translating this variation of Philip Wadler's echo:
val echoML : unit -> unit
fun echoML () = let val c = getcML () in
                  if c = #"\n" then
                    ()
                  else
                    let val _ = putcML c in
                      echoML ()
                    end
                end
fun putcML c = TextIO.output1(TextIO.stdOut,c);
fun getcML () = valOf(TextIO.input1(TextIO.stdIn));
...could then be as simple as:
echo :: OI -> ()
echo u = let !(u1:u2:u3:_) = partsOI u in
         let !c = getChar u1 in
         if c == '\n' then
           ()
         else
           let !_ = putChar c u2 in
           echo u3
where:
data OI -- abstract
foreign import ccall "primPartOI" partOI :: OI -> (OI, OI)
⋮
foreign import ccall "primGetCharOI" getChar :: OI -> Char
foreign import ccall "primPutCharOI" putChar :: Char -> OI -> ()
⋮
and:
partsOI :: OI -> [OI]
partsOI u = let !(u1, u2) = partOI u in u1 : partsOI u2
How would this work? At run-time, Main.main receives an initial OI pseudo-data value as an argument:
module Main(main) where
main :: OI -> ()
⋮
...from which other OI values are produced, using partOI or partsOI. All you have to do is ensure each new OI value is used at most once, in each call to an OI-based definition, foreign or otherwise. In return, you get back a plain ordinary result - it isn't e.g. paired with some odd abstract state, or requires the use of a callback continuation, etc.
Using OI, instead of the unit type () like Standard ML does, means we can avoid always having to use the monadic interface:
Once you're in the IO monad, you're stuck there forever, and are reduced to Algol-style imperative programming.
Robert Harper.
But if you really do need it:
type IO a = OI -> a
unitIO :: a -> IO a
unitIO x = \ u -> let !_ = partOI u in x
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO m k = \ u -> let !(u1, u2) = partOI u in
                    let !x = m u1 in
                    let !y = k x u2 in
                    y
⋮
So, monadic types aren't always needed - there are other interfaces out there:
LML had a fully fledged implementation of oracles running on a multi-processor (a Sequent Symmetry) back in ca 1989. The description in the Fudgets thesis refers to this implementation. It was fairly pleasant to work with and quite practical.
[...]
These days everything is done with monads so other solutions are sometimes forgotten.
Lennart Augustsson (2006).
Wait a moment: since it so closely resembles Standard ML's direct use of effects, is this approach and its use of pseudo-data referentially transparent?
Absolutely - just find a suitable definition of "referential transparency"; there's plenty to choose from...
After touching on Monads in respect to functional programming, does the feature actually make a language pure, or is it just another "get out of jail free card" for reasoning about computer systems in the real world, outside of blackboard maths?
EDIT:
This is not flame bait, as someone has said in this post, but a genuine question that I am hoping someone can shoot me down with and say: proof, it is pure.
Also, I am looking at the question with respect to other not-so-pure functional languages and some OO languages that use good design, and comparing the purity. So far, in my very limited world of FP, I have still not grokked the purity of Monads; you will be pleased to know, however, that I do like the idea of immutability, which is far more important in the purity stakes.
Take the following mini-language:
data Action = Get (Char -> Action) | Put Char Action | End
Get f means: read a character c, and perform action f c.
Put c a means: write character c, and perform action a.
Here's a program that prints "xy", then asks for two letters and prints them in reverse order:
Put 'x' (Put 'y' (Get (\a -> Get (\b -> Put b (Put a End)))))
You can manipulate such programs. For example:
conditionally p = Get (\a -> if a == 'Y' then p else End)
This has type Action -> Action - it takes a program and gives another program that asks for confirmation first. Here's another:
printString = foldr Put End
This has type String -> Action - it takes a string and returns a program that writes the string, like
Put 'h' (Put 'e' (Put 'l' (Put 'l' (Put 'o' End)))).
IO in Haskell works similarly. Although executing it requires performing side effects, you can build complex programs without executing them, in a pure way. You're computing on descriptions of programs (IO actions), not actually performing them.
In a language like C you can write a function void execute(Action a) that actually executes the program. In Haskell you specify that action by writing main = a. The compiler creates a program that executes the action, but you have no other way to execute an action (aside from dirty tricks).
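Such an execute is essentially what the runtime does for main; as a sketch in Haskell's own IO (hypothetical, for illustration only):
execute :: Action -> IO ()
execute (Get f)   = getChar >>= execute . f
execute (Put c a) = putChar c >> execute a
execute End       = return ()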
Obviously Get and Put are not the only options; you can add many other API calls to the IO data type, like operating on files or concurrency.
Adding a result value
Now consider the following data type.
data IO a = Get (Char -> IO a) | Put Char (IO a) | End a
The previous Action type is equivalent to IO (), i.e. an IO value which always returns "unit", comparable to "void".
This type is very similar to Haskell IO, only in Haskell IO is an abstract data type (you don't have access to the definition, only to some methods).
These are IO actions which can end with some result. A value like this:
Get (\x -> if x == 'A' then Put 'B' (End 3) else End 4)
has type IO Int and corresponds to this C program:
int f() {
    char x;
    scanf("%c", &x);
    if (x == 'A') {
        printf("B");
        return 3;
    } else return 4;
}
Evaluation and execution
There's a difference between evaluating and executing. You can evaluate any Haskell expression and get a value; for example, evaluate 2+2 :: Int into 4 :: Int. You can only execute Haskell expressions which have type IO a. Executing might have side-effects; executing Put 'a' (End 3) puts the letter a on the screen. If you evaluate an IO value, like this:
if 2+2 == 4 then Put 'A' (End 0) else Put 'B' (End 2)
you get:
Put 'A' (End 0)
But there are no side-effects - you only performed an evaluation, which is harmless.
How would you translate
bool comp(char x) {
    char y;
    scanf("%c", &y);
    if (x > y) { // character comparison
        printf(">");
        return true;
    } else {
        printf("<");
        return false;
    }
}
into an IO value?
Fix some character, say 'v'. Now comp('v') is an IO action, which compares given character to 'v'. Similarly, comp('b') is an IO action, which compares given character to 'b'. In general, comp is a function which takes a character and returns an IO action.
As a programmer in C, you might argue that comp('b') is a boolean. In C, evaluation and execution are identical (i.e. they mean the same thing, or happen simultaneously). Not in Haskell. comp('b') evaluates into some IO action, which after being executed gives a boolean. (Precisely, it evaluates into the code block as above, only with 'b' substituted for x.)
comp :: Char -> IO Bool
comp x = Get (\y -> if x > y then Put '>' (End True) else Put '<' (End False))
Now, comp 'b' evaluates into Get (\y -> if 'b' > y then Put '>' (End True) else Put '<' (End False)).
It also makes sense mathematically. In C, int f() is a function. For a mathematician, this doesn't make sense - a function with no arguments? The point of functions is to take arguments. A function int f() should be equivalent to int f. It isn't, because functions in C blend mathematical functions and IO actions.
First class
These IO values are first-class. Just like you can have a list of lists of tuples of integers [[(0,2),(8,3)],[(2,8)]] you can build complex values with IO.
(Get (\x -> Put (toUpper x) (End 0)), Get (\x -> Put (toLower x) (End 0)))
  :: (IO Int, IO Int)
A tuple of IO actions: the first reads a character and prints it uppercase, the second reads a character and prints it lowercase.
Get (\x -> End (Put x (End 0))) :: IO (IO Int)
An IO value, which reads a character x and ends, returning an IO value which writes x to screen.
Haskell has special functions which allow easy manipulation of IO values. For example:
sequence :: [IO a] -> IO [a]
which takes a list of IO actions, and returns an IO action which executes them in sequence.
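One possible definition, as a sketch (renamed to avoid clashing with the Prelude's sequence, which generalizes this to any Monad):
sequenceIO :: [IO a] -> IO [a]
sequenceIO []     = return []
sequenceIO (m:ms) = m >>= \x -> sequenceIO ms >>= \xs -> return (x:xs)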
Monads
Monads are some combinators (like conditionally above), which allow you to write programs more structurally. There's a composition function of type
IO a -> (a -> IO b) -> IO b
which, given an IO a and a function a -> IO b, returns a value of type IO b. If you write the first argument as a C function a f() and the second argument as b g(a x), it returns a program corresponding to g(f()). Given the above definition of Action / IO, you can write that function yourself.
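Here is that function, written for the IO type above (a sketch; I call it bind to keep it apart from the built-in (>>=)):
bind :: IO a -> (a -> IO b) -> IO b
bind (End a)   f = f a
bind (Get g)   f = Get (\c -> bind (g c) f)
bind (Put c a) f = Put c (bind a f)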
Notice monads are not essential to purity - you can always write programs as I did above.
Purity
The essential thing about purity is referential transparency, and distinguishing between evaluation and execution.
In Haskell, if you have f x + f x you can replace that with 2 * f x. In C, f(x) + f(x) in general is not the same as 2 * f(x), since f could print something on the screen, or modify x.
Thanks to purity, a compiler has much more freedom and can optimize better. It can rearrange computations, while in C it has to think if that changes meaning of the program.
It is important to understand that there is nothing inherently special about monads - so they definitely don't represent a "get out of jail" card in this regard. There is no compiler (or other) magic necessary to implement or use monads; they are defined in the purely functional environment of Haskell. In particular, sdcvvc has shown how to define monads in a purely functional manner, without any recourse to implementation backdoors.
What does it mean to reason about computer systems "outside of blackboard maths"? What kind of reasoning would that be? Dead reckoning?
Side-effects and pure functions are a matter of point of view. If we view a nominally side-effecting function as a function taking us from one state of the world to another, it's pure again.
We can make every side-effecting function pure by giving it a second argument, a world, and requiring that it pass us a new world when it is done. I don't know C++ at all anymore but say read has a signature like this:
vector<char> read(filepath_t)
In our new "pure style", we handle it like this:
pair<vector<char>, world_t> read(world_t, filepath_t)
This is in fact how every Haskell IO action works.
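In GHC this is not merely an analogy: the IO type really is a state-passing function, defined in GHC.Types as
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
where State# RealWorld is a zero-size token standing for "the world so far".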
So now we've got a pure model of IO. Thank goodness. If we couldn't do that then maybe Lambda Calculus and Turing Machines are not equivalent formalisms and then we'd have some explaining to do. We're not quite done but the two problems left to us are easy:
What goes in the world_t structure? A description of every grain of sand, blade of grass, broken heart and golden sunset?
We have an informal rule that we use a world only once -- after every IO operation, we throw away the world we used with it. With all these worlds floating around, though, we are bound to get them mixed up.
The first problem is easy enough. As long as we do not allow inspection of the world, it turns out we needn't trouble ourselves about storing anything in it. We just need to ensure that a new world is not equal to any previous world (lest the compiler deviously optimize some world-producing operations away, like it sometimes does in C++). There are many ways to handle this.
As for the worlds getting mixed up, we'd like to hide the world passing inside a library so that there's no way to get at the worlds and thus no way to mix them up. Turns out, monads are a great way to hide a "side-channel" in a computation. Enter the IO monad.
Some time ago, a question like yours was asked on the Haskell mailing list and there I went in to the "side-channel" in more detail. Here's the Reddit thread (which links to my original email):
http://www.reddit.com/r/haskell/comments/8bhir/why_the_io_monad_isnt_a_dirty_hack/
I'm very new to functional programming, but here's how I understand it:
In haskell, you define a bunch of functions. These functions don't get executed. They might get evaluated.
There's one function in particular that gets evaluated. This is a constant function that produces a set of "actions." The actions include the evaluation of functions and performing of IO and other "real-world" stuff. You can have functions that create and pass around these actions and they will never be executed unless a function is evaluated with unsafePerformIO or they are returned by the main function.
So in short, a Haskell program is a function, composed of other functions, that returns an imperative program. The Haskell program itself is pure. Obviously, that imperative program itself can't be. Real-world computers are by definition impure.
There's a lot more to this question and a lot of it is a question of semantics (human, not programming language). Monads are also a bit more abstract than what I've described here. But I think this is a useful way of thinking about it in general.
I think of it like this: Programs have to do something with the outside world to be useful. What's happening (or should be happening) when you write code (in any language) is that you strive to write as much pure, side-effect-free code as possible and corral the IO into specific places.
What we have in Haskell is that you're pushed more in this direction of writing to tightly control effects. In the core and in many libraries there is an enormous amount of pure code as well. Haskell is really all about this. Monads in Haskell are useful for a lot of things. And one thing they've been used for is containment around code that deals with impurity.
This way of designing together with a language that greatly facilitates it has an overall effect of helping us to produce more reliable work, requiring less unit testing to be clear on how it behaves, and allowing more re-use through composition.
If I understand what you're saying correctly, I don't see this as a something fake or only in our minds, like a "get out of jail free card." The benefits here are very real.
For an expanded version of sdcvvc's sort of construction of IO, one can look at the IOSpec package on Hackage: http://hackage.haskell.org/package/IOSpec
Is Haskell truly pure?
In the absolute sense of the term: no.
That solid-state Turing machine on which you run your programs - Haskell or otherwise - is a state-and-effect device. For any program to use all of its "features", the program will have to resort to using state and effects.
As for all the other "meanings" ascribed to that pejorative term:
To postulate a state-less model of computation on top of a machinery whose most eminent characteristic is state, seems to be an odd idea, to say the least. The gap between model and machinery is wide, and therefore costly to bridge. No hardware support feature can wash this fact aside: It remains a bad idea for practice.
This has in due time also been recognized by the protagonists of functional languages. They have introduced state (and variables) in various tricky ways. The purely functional character has thereby been compromised and sacrificed. The old terminology has become deceiving.
Niklaus Wirth
Does using monadic types actually make a language pure?
No. It's just one way of using types to demarcate:
definitions that have no visible side-effects at all - values;
definitions that potentially have visible side-effects - actions.
You could instead use uniqueness types, just like Clean does...
Is the use of monadic types just another "get out of jail free card" for reasoning of computer systems in the real world, outside of blackboard maths?
This question is ironic, considering the description of the IO type given in the Haskell 2010 report:
The IO type serves as a tag for operations (actions) that interact with the outside world. The IO type is abstract: no constructors are visible to the user. IO is an instance of the Monad and Functor classes.
...to borrow the parlance of another answer:
[…] IO is magical (having an implementation but no denotation) […]
Being abstract, the IO type is anything but a "get out of jail free card" - intricate models involving multiple semantics are required to account for the workings of I/O in Haskell. For more details, see:
Tackling the Awkward Squad: … by Simon Peyton Jones;
The semantics of fixIO by Levent Erkök, John Launchbury and Andrew Moran.
It wasn't always like this - Haskell originally had an I/O mechanism which was at least partially-visible; the last language version to have it was Haskell 1.2. Back then, the type of main was:
main :: [Response] -> [Request]
which was usually abbreviated to:
main :: Dialogue
where:
type Dialogue = [Response] -> [Request]
and Response along with Request were humble, albeit large, datatypes.
The advent of I/O using the monadic interface in Haskell changed all that - no more visible datatypes, just an abstract description. As a result, how IO, return, (>>=) etc are really defined is now specific to each implementation of Haskell.
(Why was the old I/O mechanism abandoned? "Tackling the Awkward Squad" gives an overview of its problems.)
These days, the more pertinent question should be:
Is I/O in your implementation of Haskell referentially transparent?
As Owen Stephens notes in Approaches to Functional I/O:
I/O is not a particularly active area of research, but new approaches are still being discovered […]
The Haskell language may yet have a referentially-transparent model for I/O which doesn't attract so much controversy...
No, it isn't. The IO monad is impure, because it has side effects and mutable state (race conditions are possible in Haskell programs, eh... a pure FP language shouldn't know anything like a "race condition"). Really pure FP is Clean with uniqueness typing, or Elm with FRP (functional reactive programming), not Haskell. Haskell is one big lie.
Proof:
import Control.Concurrent
import System.IO as IO
import Data.IORef as IOR
import Control.Monad.STM
import Control.Concurrent.STM.TVar
limit = 150000
threadsCount = 50
-- Don't talk about purity in Haskell when we have race conditions
-- in unlocked memory ... a PURE language doesn't need LOCKING because
-- there isn't any mutable state or other side effects !!
main = do
  hSetBuffering stdout NoBuffering
  putStr "Lock counter? : "
  a <- getLine
  if a == "y" || a == "yes" || a == "Yes" || a == "Y"
    then withLocking
    else noLocking

noLocking = do
  counter <- newIORef 0
  let doWork =
        mapM_ (\_ -> IOR.modifyIORef counter (\x -> x + 1)) [1..limit]
  threads <- mapM (\_ -> forkIO doWork) [1..threadsCount]
  -- Sorry, it's dirty but time is expensive ...
  threadDelay (15 * 1000 * 1000)
  val <- IOR.readIORef counter
  IO.putStrLn ("It may be " ++ show (threadsCount * limit) ++
               " but it is " ++ show val)

withLocking = do
  counter <- atomically (newTVar 0)
  let doWork =
        mapM_ (\_ -> atomically $ modifyTVar counter (\x -> x + 1)) [1..limit]
  threads <- mapM (\_ -> forkIO doWork) [1..threadsCount]
  threadDelay (15 * 1000 * 1000)
  val <- atomically $ readTVar counter
  IO.putStrLn ("It may be " ++ show (threadsCount * limit) ++
               " but it is " ++ show val)