In my humble opinion, the answers to the famous question "What is a monad?", especially the most upvoted ones, try to explain what a monad is without clearly explaining why monads are really necessary. Can they be explained as the solution to a problem?
Why do we need monads?
We want to program using only functions ("functional programming (FP)", after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). This way, "the program" is a function as well: main = f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0): division by zero). We have no "exceptions" in FP (an exception is not a function). How do we solve this?
Solution: let's allow functions to return two kinds of things: instead of having g : Real,Real -> Real (a function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (a function from two reals into (a real or nothing)).
But functions should (to keep things simple) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses either a real or simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type). Also note that >>= must be written only once per "boxing type" (different box, different adapting algorithm).
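In Haskell-like notation, that adapting algorithm is just two lines (a sketch, shadowing the standard operator for the sake of illustration):

import Prelude hiding ((>>=))

-- (>>=) for Maybe: on Nothing, skip the next function and propagate the failure;
-- on Just, extract the boxed value and feed the next function with it.
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= _ = Nothing
Just x  >>= f = f x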
Many other problems arise which can be solved using this same pattern:

1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values".
2. Have a composer/linker g >>= f to help connect g's output to f's input, so we don't have to change f at all.
Remarkable problems that can be solved using this technique are:
Having a global state that every function in the sequence of functions ("the program") can share: solution, the State monad (see the sketch after this list).
We don't like "impure" functions: functions that yield different output for the same input. Therefore, let's mark those functions, making them return a tagged/boxed value: the IO monad.
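To make the State case concrete, here is a minimal sketch of its "boxing type" (the real one lives in Control.Monad.State; the standalone names here are only for illustration):

-- The "box" pairs a result with a threaded state.
newtype State s a = State { runState :: s -> (a, s) }

-- The "composer" feeds the state left by one function to the next one:
bindState :: State s a -> (a -> State s b) -> State s b
bindState m f = State $ \s -> let (a, s') = runState m s
                              in  runState (f a) s'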
Total happiness!
The answer is, of course, "We don't". As with all abstractions, it isn't necessary.
Haskell does not need a monad abstraction. It isn't necessary for performing IO in a pure language. The IO type takes care of that just fine by itself. The existing monadic desugaring of do blocks could be replaced with desugaring to bindIO, returnIO, and failIO as defined in the GHC.Base module. (It's not a documented module on hackage, so I'll have to point at its source for documentation.) So no, there's no need for the monad abstraction.
So if it's not needed, why does it exist? Because it was found that many patterns of computation form monadic structures. Abstraction of a structure allows for writing code that works across all instances of that structure. To put it more concisely - code reuse.
In functional languages, the most powerful tool found for code reuse has been composition of functions. The good old (.) :: (b -> c) -> (a -> b) -> (a -> c) operator is exceedingly powerful. It makes it easy to write tiny functions and glue them together with minimal syntactic or semantic overhead.
But there are cases when the types don't work out quite right. What do you do when you have foo :: (b -> Maybe c) and bar :: (a -> Maybe b)? foo . bar doesn't typecheck, because b and Maybe b aren't the same type.
But... it's almost right. You just want a bit of leeway. You want to be able to treat Maybe b as if it were basically b. It's a poor idea to just flat-out treat them as the same type, though. That's more or less the same thing as null pointers, which Tony Hoare famously called the billion-dollar mistake. So if you can't treat them as the same type, maybe you can find a way to extend the composition mechanism (.) provides.
In that case, it's important to really examine the theory underlying (.). Fortunately, someone has already done this for us. It turns out that the combination of (.) and id forms a mathematical construct known as a category. But there are other ways to form categories. A Kleisli category, for instance, allows the morphisms being composed to be augmented a bit. A Kleisli category for Maybe would consist of (.) :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c) and id :: a -> Maybe a. That is, the morphisms in the category augment the (->) with a Maybe, so (a -> b) becomes (a -> Maybe b).
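Written out for Maybe, that augmented composition looks like this (a sketch; base exports the general version from Control.Monad as (<=<)):

-- Kleisli composition, specialised to Maybe:
composeMaybe :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c)
composeMaybe f g = \a -> case g a of
                           Nothing -> Nothing
                           Just b  -> f b

-- The Kleisli identity:
idMaybe :: a -> Maybe a
idMaybe = Just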
And suddenly, we've extended the power of composition to things that the traditional (.) operation doesn't work on. This is a source of new abstraction power. Kleisli categories work with more types than just Maybe. They work with every type that can assemble a proper category, obeying the category laws.
Left identity: id . f = f
Right identity: f . id = f
Associativity: f . (g . h) = (f . g) . h
As long as you can prove that your type obeys those three laws, you can turn it into a Kleisli category. And what's the big deal about that? Well, it turns out that monads are exactly the same thing as Kleisli categories. Monad's return is the same as Kleisli id. Monad's (>>=) isn't identical to Kleisli (.), but it turns out to be very easy to write each in terms of the other. And the category laws are the same as the monad laws, when you translate them across the difference between (>>=) and (.).
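To make the "easy to write each in terms of the other" claim concrete, here is one direction each way (a sketch, with names chosen to avoid clashing with the Prelude):

-- Kleisli (.) from (>>=):
composeK :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
composeK f g = \a -> g a >>= f

-- (>>=) from Kleisli (.):
bindK :: Monad m => m a -> (a -> m b) -> m b
bindK m f = composeK f (const m) ()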
So why go through all this bother? Why have a Monad abstraction in the language? As I alluded to above, it enables code reuse. It even enables code reuse along two different dimensions.
The first dimension of code reuse comes directly from the presence of the abstraction. You can write code that works across all instances of the abstraction. There's the entire monad-loops package consisting of loops that work with any instance of Monad.
The second dimension is indirect, but it follows from the existence of composition. When composition is easy, it's natural to write code in small, reusable chunks. This is the same way having the (.) operator for functions encourages writing small, reusable functions.
So why does the abstraction exist? Because it's proven to be a tool that enables more composition in code, resulting in creating reusable code and encouraging the creation of more reusable code. Code reuse is one of the holy grails of programming. The monad abstraction exists because it moves us a little bit towards that holy grail.
Benjamin Pierce said in TAPL:

A type system can be regarded as calculating a kind of static approximation to the run-time behaviours of the terms in a program.
That's why a language equipped with a powerful type system is strictly more expressive than a poorly typed language. You can think about monads in the same way.
As @Carl and sigfpe point out, you can equip a datatype with all the operations you want without resorting to monads, type classes or whatever other abstract stuff. However, monads allow you not only to write reusable code, but also to abstract away all redundant details.
As an example, let's say we want to filter a list. The simplest way is to use the filter function: filter (> 3) [1..10], which equals [4,5,6,7,8,9,10].
A slightly more complicated version of filter, that also passes an accumulator from left to right, is
import Data.List (mapAccumL)

swap (x, y) = (y, x)
(.*) = (.) . (.)  -- compose a one-argument function after a two-argument one

filterAccum :: (a -> b -> (Bool, a)) -> a -> [b] -> [b]
filterAccum f a xs = [x | (x, True) <- zip xs $ snd $ mapAccumL (swap .* f) a xs]
To get all i such that i <= 10, sum [1..i] > 4 and sum [1..i] < 25, we can write
filterAccum (\a x -> let a' = a + x in (a' > 4 && a' < 25, a')) 0 [1..10]
which equals [3,4,5,6].
Or we can redefine the nub function, that removes duplicate elements from a list, in terms of filterAccum:
nub' = filterAccum (\a x -> (x `notElem` a, x:a)) []
nub' [1,2,4,5,4,3,1,8,9,4] equals [1,2,4,5,3,8,9]. A list is passed as an accumulator here. The code works because it's possible to leave the list monad, so the whole computation stays pure (notElem doesn't actually use >>=, but it could). However, it's not possible to safely leave the IO monad (i.e. you cannot execute an IO action and return a pure value; the value will always be wrapped in the IO monad). Another example is mutable arrays: after you have left the ST monad, where a mutable array lives, you cannot update the array in constant time anymore. So we need the monadic filtering from the Control.Monad module:
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
filterM _ [] = return []
filterM p (x:xs) = do
    flg <- p x
    ys <- filterM p xs
    return (if flg then x:ys else ys)
filterM executes a monadic action for all elements of a list, yielding the elements for which the monadic action returns True.
A filtering example with an array:
import Control.Monad (filterM)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

nub' xs = runST $ do
    arr <- newArray (1, 9) True :: ST s (STUArray s Int Bool)
    let p i = readArray arr i <* writeArray arr i False
    filterM p xs

main = print $ nub' [1,2,4,5,4,3,1,8,9,4]
prints [1,2,4,5,3,8,9] as expected.
And a version with the IO monad, which asks what elements to return:
main = filterM p [1,2,4,5] >>= print
  where p i = putStrLn ("return " ++ show i ++ "?") *> readLn
E.g.
return 1? -- output
True -- input
return 2?
False
return 4?
False
return 5?
True
[1,5] -- output
And as a final illustration, filterAccum can be defined in terms of filterM:
import Control.Monad (filterM)
import Control.Monad.State (evalState, state)

filterAccum :: (a -> b -> (Bool, a)) -> a -> [b] -> [b]
filterAccum f a xs = evalState (filterM (state . flip f) xs) a
with the StateT monad that is used under the hood being just an ordinary datatype.
This example illustrates that monads not only allow you to abstract away computational context and write clean reusable code (due to the composability of monads, as @Carl explains), but also to treat user-defined datatypes and built-in primitives uniformly.
I don't think IO should be seen as a particularly outstanding monad, but it's certainly one of the more astounding ones for beginners, so I'll use it for my explanation.
Naïvely building an IO system for Haskell
The simplest conceivable IO system for a purely-functional language (and in fact the one Haskell started out with) is this:
main₀ :: String -> String
main₀ _ = "Hello World"
With laziness, that simple signature is enough to actually build interactive terminal programs – very limited, though. Most frustrating is that we can only output text. What if we added some more exciting output possibilities?
data Output = TxtOutput String
            | Beep Frequency

main₁ :: String -> [Output]
main₁ _ = [ TxtOutput "Hello World"
          -- , Beep 440 -- for debugging
          ]
Cute, but of course a much more realistic "alternative output" would be writing to a file. But then you'd also want some way to read from files. Any chance?
Well, when we take our main₁ program and simply pipe a file to the process (using operating system facilities), we have essentially implemented file-reading. If we could trigger that file-reading from within the Haskell language...
readFile :: FilePath -> (String -> [Output]) -> [Output]
This would use an “interactive program” String->[Output], feed it a string obtained from a file, and yield a non-interactive program that simply executes the given one.
There's one problem here: we don't really have a notion of when the file is read. The [Output] list sure gives a nice order to the outputs, but we don't get an order for when the inputs will be done.
Solution: make input-events also items in the list of things to do.
data IO₀ = TxtOut String
         | TxtIn (String -> [Output])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [Output])
         | Beep Double

main₂ :: String -> [IO₀]
main₂ _ = [ FileRead "/dev/null" $ \_ ->
              [TxtOutput "Hello World"]
          ]
Ok, now you may spot an imbalance: you can read a file and make output dependent on it, but you can't use the file contents to decide to e.g. also read another file. Obvious solution: make the result of the input-events also something of type IO, not just Output. That sure includes simple text output, but also allows reading additional files etc..
data IO₁ = TxtOut String
         | TxtIn (String -> [IO₁])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [IO₁])
         | Beep Double

main₃ :: String -> [IO₁]
main₃ _ = [ TxtIn $ \_ ->
              [TxtOut "Hello World"]
          ]
That would now actually allow you to express any file operation you might want in a program (though perhaps not with good performance), but it's somewhat overcomplicated:
main₃ yields a whole list of actions. Why don't we simply use the signature :: IO₁, which has this as a special case?
The lists don't really give a reliable overview of program flow anymore: most subsequent computations will only be "announced" as the result of some input operation. So we might as well ditch the list structure, and simply cons an "and then do" to each output operation.
data IO₂ = TxtOut String IO₂
         | TxtIn (String -> IO₂)
         | Terminate

main₄ :: IO₂
main₄ = TxtIn $ \_ ->
          TxtOut "Hello World"
           Terminate
Not too bad!
So what has all of this to do with monads?
In practice, you wouldn't want to use plain constructors to define all your programs. There would need to be a good couple of such fundamental constructors, yet for most higher-level stuff we would like to write a function with some nice high-level signature. It turns out most of these would look quite similar: accept some kind of meaningfully-typed value, and yield an IO action as the result.
getTime :: (UTCTime -> IO₂) -> IO₂
randomRIO :: Random r => (r,r) -> (r -> IO₂) -> IO₂
findFile :: RegEx -> (Maybe FilePath -> IO₂) -> IO₂
There's evidently a pattern here, and we'd better write it as
type IO₃ a = (a -> IO₂) -> IO₂ -- If this reminds you of continuation-passing
                               -- style, you're right.
getTime :: IO₃ UTCTime
randomRIO :: Random r => (r,r) -> IO₃ r
findFile :: RegEx -> IO₃ (Maybe FilePath)
Now that starts to look familiar, but we're still only dealing with thinly-disguised plain functions under the hood, and that's risky: each “value-action” has the responsibility of actually passing on the resulting action of any contained function (else the control flow of the entire program is easily disrupted by one ill-behaved action in the middle). We'd better make that requirement explicit. Well, it turns out those are the monad laws, though I'm not sure we can really formulate them without the standard bind/join operators.
At any rate, we've now reached a formulation of IO that has a proper monad instance:
import Control.Monad (ap)

data IO₄ a = TxtOut String (IO₄ a)
           | TxtIn (String -> IO₄ a)
           | TerminateWith a

txtOut :: String -> IO₄ ()
txtOut s = TxtOut s $ TerminateWith ()

txtIn :: IO₄ String
txtIn = TxtIn TerminateWith

instance Functor IO₄ where
  fmap f (TerminateWith a) = TerminateWith $ f a
  fmap f (TxtIn g) = TxtIn $ fmap f . g
  fmap f (TxtOut s c) = TxtOut s $ fmap f c

instance Applicative IO₄ where
  pure = TerminateWith
  (<*>) = ap

instance Monad IO₄ where
  TerminateWith x >>= f = f x
  TxtOut s c >>= f = TxtOut s $ c >>= f
  TxtIn g >>= f = TxtIn $ (>>= f) . g
Obviously this is not an efficient implementation of IO, but it's in principle usable.
Monads serve basically to compose functions together in a chain. Period.
Now the way they compose differs across the existing monads, thus resulting in different behaviors (e.g., to simulate mutable state in the state monad).
The confusion about monads is that being so general, i.e., a mechanism to compose functions, they can be used for many things, thus leading people to believe that monads are about state, about IO, etc, when they are only about "composing functions".
Now, one interesting thing about monads is that the result of the composition is always of type "M a", that is, a value inside an envelope tagged with "M". This feature happens to be really nice for implementing, for example, a clear separation between pure and impure code: declare all impure actions as functions of type "IO a" and provide no function, when defining the IO monad, to take the "a" value out of the "IO a". The result is that no function can be pure and at the same time take a value out of an "IO a", because there is no way to take such a value while staying pure (the function must be inside the "IO" monad to use such a value). (NOTE: well, nothing is perfect, so the "IO straitjacket" can be broken using "unsafePerformIO :: IO a -> a", thus polluting what was supposed to be a pure function, but this should be used very sparingly, and only when you really know you are not introducing any impure code with side effects.)
Monads are just a convenient framework for solving a class of recurring problems. First, monads must be functors (i.e. must support mapping without looking at the elements (or their type)); they must also bring a binding (or chaining) operation and a way to create a monadic value from an element type (return). Finally, bind and return must satisfy the monad laws: left identity, right identity and associativity. (Alternatively, one could define monads to have a flattening operation instead of binding.)
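The two formulations agree: bind can be recovered from mapping followed by flattening. A minimal sketch:

import Control.Monad (join)

-- bind expressed via fmap and join:
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin m f = join (fmap f m)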
The list monad is commonly used to deal with non-determinism. The bind operation selects one element of the list (intuitively, all of them in parallel worlds), lets the programmer do some computation with them, and then combines the results in all worlds into a single list (by concatenating, or flattening, a nested list). Here is how one would define a permutation function in the monadic framework of Haskell:
perm [e] = [[e]]
perm l = do (leader, index) <- zip l [0 :: Int ..]
            let shortened = take index l ++ drop (index + 1) l
            trailer <- perm shortened
            return (leader : trailer)
Here is an example repl session:
*Main> perm "a"
["a"]
*Main> perm "ab"
["ab","ba"]
*Main> perm ""
[]
*Main> perm "abc"
["abc","acb","bac","bca","cab","cba"]
It should be noted that the list monad is in no way a side effecting computation. A mathematical structure being a monad (i.e. conforming to the above mentioned interfaces and laws) does not imply side effects, though side-effecting phenomena often nicely fit into the monadic framework.
You need monads if you have a type constructor and functions that return values of that type family. Eventually, you would like to combine these kinds of functions together. These are the three key elements of the answer to "why".
Let me elaborate. You have Int, String and Real, and functions of type Int -> String, String -> Real and so on. You can combine these functions easily, ending up with Int -> Real. Life is good.
Then, one day, you need to create a new family of types. It could be because you need to consider the possibility of returning no value (Maybe), returning an error (Either), multiple results (List) and so on.
Notice that Maybe is a type constructor. It takes a type, like Int, and returns a new type, Maybe Int. First thing to remember: no type constructor, no monad.
Of course, you want to use your type constructor in your code, and soon you end with functions like Int -> Maybe String and String -> Maybe Float. Now, you can't easily combine your functions. Life is not good anymore.
And here's when monads come to the rescue. They allow you to combine those kinds of functions again. You just need to change the composition (.) for the Kleisli composition (>=>), or use >>= directly.
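A minimal sketch of that rescue, with made-up example functions (parseInt and half are assumptions for illustration, not standard names):

import Control.Monad ((>=>))
import Text.Read (readMaybe)

parseInt :: String -> Maybe Int
parseInt = readMaybe

half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- Plain (.) cannot combine these, but Kleisli composition can:
parseAndHalve :: String -> Maybe Int
parseAndHalve = parseInt >=> half

main :: IO ()
main = print (parseAndHalve "10") -- Just 5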
Why do we need monadic types?
Since it was the quandary of I/O and its observable effects in nonstrict languages like Haskell that brought the monadic interface to such prominence:
[...] monads are used to address the more general problem of computations (involving state, input/output, backtracking, ...) returning values: they do not solve any input/output-problems directly but rather provide an elegant and flexible abstraction of many solutions to related problems. [...] For instance, no less than three different input/output-schemes are used to solve these basic problems in Imperative functional programming, the paper which originally proposed `a new model, based on monads, for performing input/output in a non-strict, purely functional language'. [...]
[Such] input/output-schemes merely provide frameworks in which side-effecting operations can safely be used with a guaranteed order of execution and without affecting the properties of the purely functional parts of the language.
Claus Reinke (pages 96-97 of 210).
(emphasis by me.)
[...] When we write effectful code – monads or no monads – we have to constantly keep in mind the context of expressions we pass around.
The fact that monadic code ‘desugars’ (is implementable in terms of) side-effect-free code is irrelevant. When we use monadic notation, we program within that notation – without considering what this notation desugars into. Thinking of the desugared code breaks the monadic abstraction. A side-effect-free, applicative code is normally compiled to (that is, desugars into) C or machine code. If the desugaring argument has any force, it may be applied just as well to the applicative code, leading to the conclusion that it all boils down to the machine code and hence all programming is imperative.
[...] From the personal experience, I have noticed that the mistakes I make when writing monadic code are exactly the mistakes I made when programming in C. Actually, monadic mistakes tend to be worse, because monadic notation (compared to that of a typical imperative language) is ungainly and obscuring.
Oleg Kiselyov (page 21 of 26).
The most difficult construct for students to understand is the monad. I introduce IO without mentioning monads.
Olaf Chitil.
More generally:
Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.
Considering that the concept of monads is not likely to disappear from the functional programming landscape any time soon, it is vital that we, as the functional programming community, somehow overcome the problems novices encounter when first studying monads.
Tim Steenvoorden, Jurriën Stutterheim, Erik Barendsen and Rinus Plasmeijer.
If only there was another way to specify "a guaranteed order of execution" in Haskell, while keeping the ability to separate regular Haskell definitions from those involved in I/O (and its observable effects) - translating this variation of Philip Wadler's echo:
val echoML : unit -> unit
fun echoML () = let val c = getcML () in
                  if c = #"\n" then
                    ()
                  else
                    let val _ = putcML c in
                      echoML ()
                    end
                end

fun putcML c = TextIO.output1(TextIO.stdOut,c);
fun getcML () = valOf(TextIO.input1(TextIO.stdIn));
...could then be as simple as:
echo :: OI -> ()
echo u = let !(u1:u2:u3:_) = partsOI u in
         let !c = getChar u1 in
         if c == '\n' then
           ()
         else
           let !_ = putChar c u2 in
           echo u3
where:
data OI -- abstract
foreign import ccall "primPartOI" partOI :: OI -> (OI, OI)
⋮
foreign import ccall "primGetCharOI" getChar :: OI -> Char
foreign import ccall "primPutCharOI" putChar :: Char -> OI -> ()
⋮
and:
partsOI :: OI -> [OI]
partsOI u = let !(u1, u2) = partOI u in u1 : partsOI u2
How would this work? At run-time, Main.main receives an initial OI pseudo-data value as an argument:
module Main(main) where
main :: OI -> ()
⋮
...from which other OI values are produced, using partOI or partsOI. All you have to do is ensure each new OI value is used at most once, in each call to an OI-based definition, foreign or otherwise. In return, you get back a plain ordinary result; it isn't e.g. paired with some odd abstract state, nor does it require the use of a callback continuation, etc.
Using OI, instead of the unit type () like Standard ML does, means we can avoid always having to use the monadic interface:
Once you're in the IO monad, you're stuck there forever, and are reduced to Algol-style imperative programming.
Robert Harper.
But if you really do need it:
type IO a = OI -> a

unitIO :: a -> IO a
unitIO x = \ u -> let !_ = partOI u in x

bindIO :: IO a -> (a -> IO b) -> IO b
bindIO m k = \ u -> let !(u1, u2) = partOI u in
                    let !x = m u1 in
                    let !y = k x u2 in
                    y
⋮
So, monadic types aren't always needed - there are other interfaces out there:
LML had a fully fledged implementation of oracles running on a multi-processor (a Sequent Symmetry) back in ca. 1989. The description in the Fudgets thesis refers to this implementation. It was fairly pleasant to work with and quite practical.
[...]
These days everything is done with monads so other solutions are sometimes forgotten.
Lennart Augustsson (2006).
Wait a moment: since it so closely resembles Standard ML's direct use of effects, is this approach and its use of pseudo-data referentially transparent?
Absolutely - just find a suitable definition of "referential transparency"; there's plenty to choose from...
Okay, so I am not a Haskell programmer, but I am absolutely intrigued by a lot of the ideas behind Haskell and am looking into learning it. But I'm stuck at square one: I can't seem to wrap my head around Monads, which seem to be fairly fundamental. I know there are a million questions on SO asking to explain Monads, so I'm going to be a little more specific about what's bugging me:
I read this excellent article (an introduction in Javascript), and thought that I understood Monads completely. Then I read the Wikipedia entry on Monads, and saw this:
A binding operation of polymorphic type (M t)→(t→M u)→(M u), which Haskell represents by the infix operator >>=. Its first argument is a value in a monadic type, its second argument is a function that maps from the underlying type of the first argument to another monadic type, and its result is in that other monadic type.
Okay, in the article that I cited, bind was a function which took only one argument. Wikipedia says two. What I thought I understood about Monads was the following:
A Monad's purpose is to take a function with different input and output types and to make it composable. It does this by wrapping the input and output types with a single monadic type.
A Monad consists of two interrelated functions: bind and unit. Bind takes a non-composable function f and returns a new function g that accepts the monadic type as input and returns the monadic type. g is composable. The unit function takes an argument of the type that f expected, and wraps it in the monadic type. This can then be passed to g, or to any composition of functions like g.
But there must be something wrong, because my concept of bind takes one argument: a function. But (according to Wikipedia) Haskell's bind actually takes two arguments! Where is my mistake?
You are not making a mistake. The key idea to understand here is currying: a Haskell function of two arguments can be seen in two ways. The first is as simply a function of two arguments. If you have, for example, (+), this is usually seen as taking two arguments and adding them. The other way to see it is as an addition-machine producer: (+) is a function that takes a number, say x, and makes a function that will add x.
(+) x = \y -> x + y
(+) x y = (\y -> x + y) y = x + y
When dealing with monads, sometimes it is probably better, as ephemient mentioned above, to think of =<<, the flipped version of >>=. There are two ways to look at this:
(=<<) :: (a -> m b) -> m a -> m b
which is a function of two arguments, and
(=<<) :: (a -> m b) -> (m a -> m b)
which transforms the input function to an easily composed version as the article mentioned. These are equivalent just like (+) as I explained before.
Allow me to tear down your beliefs about Monads. I sincerely hope you realize that I am not trying to be rude; I'm simply trying to avoid mincing words.
A Monad's purpose is to take a function with different input and output types and to make it composable. It does this by wrapping the input and output types with a single monadic type.
Not exactly. When you start a sentence with "A Monad's purpose", you're already on the wrong foot. Monads don't necessarily have a "purpose". Monad is simply an abstraction, a classification which applies to certain types and not to others. The purpose of the Monad abstraction is simply that, abstraction.
A Monad consists of two interrelated functions: bind and unit.
Yes and no. The combination of bind and unit are sufficient to define a Monad, but the combination of join, fmap, and unit is equally sufficient. The latter is, in fact, the way that Monads are typically described in Category Theory.
Bind takes a non-composable function f and returns a new function g that accepts the monadic type as input and returns the monadic type.
Again, not exactly. A monadic function f :: a -> m b is perfectly composable, with certain types. I can post-compose it with a function g :: m b -> c to get g . f :: a -> c, or I can pre-compose it with a function h :: c -> a to get f . h :: c -> m b.
But you got the second part absolutely right: (>>= f) :: m a -> m b. As others have noted, Haskell's bind function takes the arguments in the opposite order.
g is composable.
Well, yes. If g :: m a -> m b, then you can pre-compose it with a function f :: c -> m a to get g . f :: c -> m b, or you can post-compose it with a function h :: m b -> c to get h . g :: m a -> c. Note that c could be of the form m v where m is a Monad. I suppose when you say "composable" you mean to say "you can compose arbitrarily long chains of functions of this form", which is sort of true.
The unit function takes an argument of the type that f expected, and wraps it in the monadic type.
A roundabout way of saying it, but yes, that's about right.
This [the result of applying unit to some value] can then be passed to g, or to any composition of functions like g.
Again, yes. Although it is generally not idiomatic Haskell to call unit (or in Haskell, return) and then pass that to (>>= f).
-- instead of
return x >>= f >>= g
-- simply go with
f x >>= g
-- instead of
\x -> return x >>= f >>= g
-- simply go with
f >=> g
-- or
g <=< f
The article you link is based on sigfpe's article, which uses a flipped definition of bind:
The first thing is that I've flipped the definition of bind and written it as the word 'bind' whereas it's normally written as the operator >>=. So bind f x is normally written as x >>= f.
So, the Haskell bind takes a value enclosed in a monad and returns a function which takes a function and then calls it with the extracted value. I might be using imprecise terminology, so maybe it's better with code.
You have:
sine x = (sin x, "sine was called.")
cube x = (x * x * x, "cube was called.")
Now, translating your JS bind (Haskell does automatic currying, so calling bind f returns a function that takes a tuple, and then pattern matching takes care of unpacking it into x and s, I hope that's understandable):
bind f (x, s) = (y, s ++ t)
  where (y, t) = f x
You can see it working:
*Main> :t sine
sine :: Floating t => t -> (t, [Char])
*Main> :t bind sine
bind sine :: Floating t1 => (t1, [Char]) -> (t1, [Char])
*Main> (bind sine . bind cube) (3, "")
(0.956375928404503,"cube was called.sine was called.")
Now, let's reverse arguments of bind:
bind' (x, s) f = (y, s ++ t)
  where (y, t) = f x
You can clearly see it's still doing the same thing, but with a bit different syntax:
*Main> bind' (bind' (3, "") cube) sine
(0.956375928404503,"cube was called.sine was called.")
Now, Haskell has a syntax trick that allows you to use any function as an infix operator. So you can write:
*Main> (3, "") `bind'` cube `bind'` sine
(0.956375928404503,"cube was called.sine was called.")
Now rename bind' to >>= ((3, "") >>= cube >>= sine) and you've got what you were looking for. As you can see, with this definition, you can effectively get rid of the separate composition operator.
Translating the new thing back into JavaScript would yield something like this (notice that again, I only reverse the argument order):
var bind = function(tuple) {
  return function(f) {
    var x = tuple[0],
        s = tuple[1],
        fx = f(x),
        y = fx[0],
        t = fx[1];
    return [y, s + t];
  };
};
// ugly, but it's JS, after all
var f = function(x) { return bind(bind(x)(cube))(sine); }
f([3, ""]); // [0.956375928404503, "cube was called.sine was called."]
Hope this helps, and doesn't introduce more confusion; the point is that those two bind definitions are equivalent, differing only in call syntax.
I am learning Haskell and trying to understand Monads. I have two questions:
From what I understand, Monad is just another typeclass that declares ways to interact with data inside "containers", including Maybe, List, and IO. It seems clever and clean to implement these 3 things with one concept, but really, the point is so there can be clean error handling in a chain of functions, containers, and side effects. Is this a correct interpretation?
How exactly is the problem of side-effects solved? With this concept of containers, the language essentially says anything inside the containers is non-deterministic (such as I/O). Because lists and IOs are both containers, lists are equivalence-classed with IO, even though values inside lists seem pretty deterministic to me. So what is deterministic and what has side-effects? I can't wrap my head around the idea that a basic value is deterministic, until you stick it in a container (which is no more special than the same value with some other values next to it, e.g. Nothing) and it can now be random.
Can someone explain how, intuitively, Haskell gets away with changing state with inputs and output? I'm not seeing the magic here.
The point is so there can be clean error handling in a chain of functions, containers, and side effects. Is this a correct interpretation?
Not really. You've mentioned a lot of concepts that people cite when trying to explain monads, including side effects, error handling and non-determinism, but it sounds like you've gotten the incorrect sense that all of these concepts apply to all monads. But there's one concept you mentioned that does: chaining.
There are two different flavors of this, so I'll explain it two different ways: one without side effects, and one with side effects.
No Side Effects:
Take the following example:
addM :: (Monad m, Num a) => m a -> m a -> m a
addM ma mb = do
  a <- ma
  b <- mb
  return (a + b)
This function adds two numbers, with the twist that they are wrapped in some monad. Which monad? Doesn't matter! In all cases, that special do syntax de-sugars to the following:
addM ma mb =
  ma >>= \a ->
  mb >>= \b ->
  return (a + b)
... or, with operator precedence made explicit:
ma >>= (\a -> mb >>= (\b -> return (a + b)))
Now you can really see that this is a chain of little functions, all composed together, and its behavior will depend on how >>= and return are defined for each monad. If you're familiar with polymorphism in object-oriented languages, this is essentially the same thing: one common interface with multiple implementations. It's slightly more mind-bending than your average OOP interface, since the interface represents a computation policy rather than, say, an animal or a shape or something.
Okay, let's see some examples of how addM behaves across different monads. The Identity monad is a decent place to start, since its definition is trivial:
instance Monad Identity where
  return a = Identity a    -- create an Identity value
  (Identity a) >>= f = f a -- apply f to a
So what happens when we say:
addM (Identity 1) (Identity 2)
Expanding this, step by step:
(Identity 1) >>= (\a -> (Identity 2) >>= (\b -> return (a + b)))
(\a -> (Identity 2) >>= (\b -> return (a + b))) 1
(Identity 2) >>= (\b -> return (1 + b))
(\b -> return (1 + b)) 2
return (1 + 2)
Identity 3
Great. Now, since you mentioned clean error handling, let's look at the Maybe monad. Its definition is only slightly trickier than Identity:
instance Monad Maybe where
  return a = Just a       -- same as Identity monad!
  (Just a) >>= f = f a    -- same as Identity monad again!
  Nothing >>= _ = Nothing -- the only real difference from Identity
So you can imagine that if we say addM (Just 1) (Just 2) we'll get Just 3. But for grins, let's expand addM Nothing (Just 1) instead:
Nothing >>= (\a -> (Just 1) >>= (\b -> return (a + b)))
Nothing
Or the other way around, addM (Just 1) Nothing:
(Just 1) >>= (\a -> Nothing >>= (\b -> return (a + b)))
(\a -> Nothing >>= (\b -> return (a + b))) 1
Nothing >>= (\b -> return (1 + b))
Nothing
So the Maybe monad's definition of >>= was tweaked to account for failure. When a function is applied to a Maybe value using >>=, you get what you'd expect.
Okay, so you mentioned non-determinism. Yes, the list monad can be thought of as modeling non-determinism in a sense... It's a little weird, but think of the list as representing alternative possible values: [1, 2, 3] is not a collection, it's a single non-deterministic number that could be either one, two or three. That sounds dumb, but it starts to make some sense when you think about how >>= is defined for lists: it applies the given function to each possible value. So addM [1, 2] [3, 4] is actually going to compute all possible sums of those two non-deterministic values: [4, 5, 5, 6].
Okay, now to address your second question...
Side Effects:
Let's say you apply addM to two values in the IO monad, like:
addM (return 1 :: IO Int) (return 2 :: IO Int)
You don't get anything special, just 3 in the IO monad. addM does not read or write any mutable state, so it's kind of no fun. Same goes for the State or ST monads. No fun. So let's use a different function:
fireTheMissiles :: IO Int -- returns the number of casualties
Clearly the world will be different each time missiles are fired. Clearly. Now let's say you're trying to write some totally innocuous, side effect free, non-missile-firing code. Perhaps you're trying once again to add two numbers, but this time without any monads flying around:
add :: Num a => a -> a -> a
add a b = a + b
and all of a sudden your hand slips, and you accidentally typo:
add a b = a + b + fireTheMissiles
An honest mistake, really. The keys were so close together. Fortunately, because fireTheMissiles was of type IO Int rather than simply Int, the compiler is able to avert disaster.
Okay, totally contrived example, but the point is that in the case of IO, ST and friends, the type system keeps effects isolated to some specific context. It doesn't magically eliminate side effects, making code referentially transparent that shouldn't be, but it does make it clear at compile time what scope the effects are limited to.
So getting back to the original point: what does this have to do with chaining or composition of functions? Well, in this case, it's just a handy way of expressing a sequence of effects:
fireTheMissilesTwice :: IO ()
fireTheMissilesTwice = do
  a <- fireTheMissiles
  print a
  b <- fireTheMissiles
  print b
Summary:
A monad represents some policy for chaining computations. Identity's policy is pure function composition, Maybe's policy is function composition with failure propagation, IO's policy is impure function composition, and so on.
Let me start by pointing at the excellent "You could have invented monads" article. It illustrates how the Monad structure can naturally manifest while you are writing programs. But the tutorial doesn't mention IO, so I will have a stab here at extending the approach.
Let us start with what you probably have already seen - the container monad. Let's say we have:
f, g :: Int -> [Int]
One way of looking at this is that it gives us a number of possible outputs for every possible input. What if we want all possible outputs for the composition of both functions, i.e. all possibilities we could get by applying the functions one after the other?
Well, there's a function for that:
fg x = concatMap g $ f x
If we put this more generally, we get
fg x = f x >>= g
xs >>= f = concatMap f xs
return x = [x]
Why would we want to wrap it like this? Well, writing our programs primarily using >>= and return gives us some nice properties: for example, we can be sure that it's relatively hard to "forget" solutions; we'd explicitly have to reintroduce a way to drop them, say by adding another function skip. And also, we now have a monad and can use all the combinators from the monad library!
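For concreteness, a tiny sketch with arbitrarily chosen f and g (any Int -> [Int] functions would do):

f, g :: Int -> [Int]
f x = [x, -x]    -- a number and its negation
g x = [x, x + 1] -- a number and its successor

fg :: Int -> [Int]
fg x = f x >>= g -- fg 3 == [3,4,-3,-2]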
Now, let us jump to your trickier example. Let's say the two functions are "side-effecting". That's not non-deterministic; it just means that in theory the whole world is both their input (as it can influence them) and their output (as the functions can influence it). So we get something like:
f, g :: Int -> RealWorld# -> (Int, RealWorld#)
If we now want g to get the world that f left behind, we'd write:
fg x rw = let (y, rw')  = f x rw
              (r, rw'') = g y rw'
          in (r, rw'')
Or generalized:
fg x = f x >>= g

x >>= f = \rw -> let (y, rw')  = x rw
                     (r, rw'') = f y rw'
                 in (r, rw'')
return x = \rw -> (x, rw)
Now if the user can only use >>=, return and a few pre-defined IO values we get a nice property again: The user will never actually see the RealWorld# getting passed around! And that is a very good thing, as you aren't really interested in the details of where getLine gets its data from. And again we get all the nice high-level functions from the monad libraries.
So the important things to take away:
The monad captures common patterns in your code, like "always pass all elements of container A to container B" or "pass this real-world-tag through". Often, once you realize that there is a monad in your program, complicated things become simply applications of the right monad combinator.
The monad allows you to completely hide the implementation from the user. It is an excellent encapsulation mechanism, be it for your own internal state or for how IO manages to squeeze non-purity into a pure program in a relatively safe way.
Appendix
In case someone is still scratching their head over RealWorld# as much as I did when I started: there's obviously more magic going on after all the monad abstraction has been removed. The compiler will then make use of the fact that there can only ever be one "real world". That's good news and bad news:
It follows that the compiler must guarantee execution ordering between functions (which is what we were after!)
But it also means that actually passing the real world isn't necessary as there is only one we could possibly mean: The one that is current when the function gets executed!
Bottom line is that once execution order is fixed, RealWorld# simply gets optimized out. Therefore programs using the IO monad actually have zero runtime overhead. Also note that using RealWorld# is obviously only one possible way to put IO - but it happens to be the one GHC uses internally. The good thing about monads is that, again, the user really doesn't need to know.
You could see a given monad m as a set/family (or realm, domain, etc.) of actions (think of a C statement). The monad m defines the kind of (side-)effects that its actions may have:
with [] you can define actions which can fork their executions in different "independent parallel worlds";
with Either Foo you can define actions which can fail with errors of type Foo;
with IO you can define actions which can have side-effects on the "outside world" (access files, network, launch processes, do a HTTP GET ...);
you can have a monad whose effect is "randomness" (see package MonadRandom);
you can define a monad whose actions can make a move in a game (say chess, Go…) and receive moves from an opponent, but are not able to write to your filesystem or anything else.
Summary
If m is a monad, m a is an action which produces a result/output of type a.
The >> and >>= operators are used to create more complex actions out of simpler ones:
a >> b is a macro-action which does action a and then action b;
a >> a does action a and then action a again;
with >>= the second action can depend on the output of the first one.
The exact meaning of what an action is and what doing an action and then another one is depends on the monad: each monad defines an imperative sublanguage with some features/effects.
Simple sequencing (>>)
Let's say we have a given monad M and some actions incrementCounter, decrementCounter, readCounter:
instance Monad M where ...

-- Modify the counter and do not produce any result:
incrementCounter :: M ()
decrementCounter :: M ()

-- Get the current value of the counter:
readCounter :: M Integer
Now we would like to do something interesting with those actions. The first thing we would like to do with those actions is to sequence them. As in say C, we would like to be able to do:
// This is C:
counter++;
counter++;
We define an "sequencing operator" >>. Using this operator we can write:
incrementCounter >> incrementCounter
What is the type of "incrementCounter >> incrementCounter"?
It is an action made of two smaller actions, like in C where you can write compound statements from atomic statements:
// This is a macro statement made of several statements
{
    counter++;
    counter++;
}

// and we can use it anywhere we may use a statement:
if (condition) {
    counter++;
    counter++;
}
it can have the same kind of effects as its subactions;
it does not produce any output/result.
So we would like incrementCounter >> incrementCounter to be of type M (): an (macro-)action with the same kind of possible effects but without any output.
More generally, given two actions:
action1 :: M a
action2 :: M b
we define a >> b as the macro-action which is obtained by doing (whatever that means in our domain of action) a and then b, and which produces as output the result of the execution of the second action. The type of >> is:
(>>) :: M a -> M b -> M b
or more generally:
(>>) :: (Monad m) => m a -> m b -> m b
We can define bigger sequences of actions from simpler ones:
action1 >> action2 >> action3 >> action4
Input and outputs (>>=)
We would like to be able to increment by something other than 1 at a time:
incrementBy 5
We want to provide some input to our actions; in order to do this we define a function incrementBy taking an Int and producing an action:
incrementBy :: Int -> M ()
Now we can write things like:
incrementCounter >> readCounter >> incrementBy 5
But we have no way to feed the output of readCounter into incrementBy. In order to do this, a slightly more powerful version of our sequencing operator is needed. The >>= operator can feed the output of a given action as input to the next action. We can write:
readCounter >>= incrementBy
It is an action which executes the readCounter action, feeds its output to the incrementBy function and then executes the resulting action.
The type of >>= is:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
A (partial) example
Let's say I have a Prompt monad which can only display information (text) to the user and ask the user for information:
-- We don't have access to the internal structure of the Prompt monad
module Prompt (Prompt(), echo, prompt) where
-- Opaque
data Prompt a = ...
instance Monad Prompt where ...
-- Display a line to the CLI:
echo :: String -> Prompt ()
-- Ask a question to the user:
prompt :: String -> Prompt String
Let's try to define a promptBoolean message action which asks a question and produces a boolean value.
We use the prompt (message ++ "[y/n]") action and feed its output to a function f:
f "y" should be an action which does nothing but produce True as output;
f "n" should be an action which does nothing but produce False as output;
anything else should restart the action (do the action again);
promptBoolean would look like this:
-- Incomplete version, some bits are missing:
promptBoolean :: String -> Prompt Bool
promptBoolean message = prompt (message ++ "[y/n]") >>= f
  where f result = if result == "y"
                   then ???? -- We need here an action which does nothing but produce `True` as output
                   else if result == "n"
                   then ???? -- We need here an action which does nothing but produce `False` as output
                   else echo "Input not recognised, try again." >> promptBoolean message
Producing a value without effect (return)
In order to fill in the missing bits of our promptBoolean function, we need a way to represent dummy actions without any side effect, which only output a given value:
-- "return 5" is an action which does nothing but outputs 5
return :: (Monad m) => a -> m a
and we can now write our promptBoolean function:
promptBoolean :: String -> Prompt Bool
promptBoolean message = prompt (message ++ "[y/n]") >>= f
  where f result = if result == "y"
                   then return True
                   else if result == "n"
                   then return False
                   else echo "Input not recognised, try again." >> promptBoolean message
By composing those two simple actions (promptBoolean, echo) we can define any kind of dialogue between the user and your program (the actions of the program are deterministic as our monad does not have a "randomness effect").
promptInt :: String -> Prompt Int
promptInt = ... -- similar

-- Classic "guess a number" game/dialogue
guess :: Int -> Prompt ()
guess n = promptInt "Guess:" >>= f
  where f m = if m == n
              then echo "Found"
              else (if m > n
                    then echo "Too big"
                    else echo "Too small") >> guess n
The operations of a monad
A Monad is a set of actions which can be composed with the return and >>= operators:
>>= for action composition;
return for producing a value without any (side-)effect.
These two operators are the minimal operators needed to define a Monad.
In Haskell, the >> operator is needed as well but it can in fact be derived from >>=:
(>>) :: Monad m => m a -> m b -> m b
a >> b = a >>= f
  where f x = b
In Haskell, an extra fail operator is needed as well, but this is really a hack (it has since been moved out of the Monad class, into MonadFail).
This is the Haskell definition of a Monad:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b -- can be derived from (>>=)
  fail :: String -> m a -- mostly a hack
Actions are first-class
One great thing about monads is that actions are first-class. You can keep them in variables, and you can define functions which take actions as input and produce other actions as output. For example, we can define a while operator:
-- while x y : does action y while action x outputs True
while :: (Monad m) => m Bool -> m a -> m ()
while x y = x >>= f
  where f True = y >> while x y
        f False = return ()
Summary
A Monad is a set of actions in some domain. The monad/domain defines the kind of "effects" which are possible. The >> and >>= operators represent sequencing of actions, and monadic expressions may be used to represent any kind of "imperative (sub)program" in your (functional) Haskell program.
The great things are that:
you can design your own Monad which supports the features and effects that you want
see Prompt for an example of a "dialogue only subprogram",
see Rand for an example of "sampling only subprogram";
you can write your own control structures (while, throw, catch or more exotic ones) as functions taking actions and composing them in some way to produce a bigger macro-actions.
MonadRandom
A good way of understanding monads is the MonadRandom package. The Rand monad is made of actions whose output can be random (the effect is randomness). An action in this monad is a kind of random variable (or, more exactly, a sampling process):
-- Sample an Int from some distribution
action :: Rand Int
Using Rand to do some sampling/random algorithms is quite interesting because you have random variables as first class values:
-- Estimate the mean by sampling the random variable x, n times
sampleMean :: Real a => Int -> Rand a -> Rand a
sampleMean n x = ...
In this setting, the sequence function from Prelude,
sequence :: Monad m => [m a] -> m [a]
becomes
sequence :: [Rand a] -> Rand [a]
It creates a random variable obtained by sampling independently from a list of random variables.
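For example (a sketch against the MonadRandom package, where Rand carries an explicit generator type, unlike the simplified Rand above):

import Control.Monad.Random (Rand, StdGen, evalRandIO, getRandomR)

-- A die roll as a first-class random variable:
die :: Rand StdGen Int
die = getRandomR (1, 6)

-- Three independent rolls, combined with sequence:
threeDice :: Rand StdGen [Int]
threeDice = sequence [die, die, die]

main :: IO ()
main = evalRandIO threeDice >>= print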
There are three main observations concerning the IO monad:
1) You can't get values out of it. Other types like Maybe might allow you to extract values, but neither the monad class interface itself nor the IO data type allows it.
2) "Inside" IO is not only the real value but also that "RealWorld" thing. This dummy value is used to enforce the chaining of actions by the type system: If you have two independent calculations, the use of >>= makes the second calculation dependent on the first.
3) Assume a non-deterministic thing like random :: () -> Int, which isn't allowed in Haskell. If you change the signature to random :: Blubb -> (Blubb, Int), it is allowed, provided you make sure that nobody can ever use a Blubb twice: because in that case all inputs are "different", it is no problem that the outputs are different as well.
Now we can use fact 1): nobody can get anything out of IO, so we can use the RealWorld dummy hidden in IO to serve as a Blubb. There is only one IO in the whole application (the one we get from main), and it takes care of proper sequencing, as we have seen in 2). Problem solved.
One thing that often helps me to understand the nature of something is to examine it in the most trivial way possible. That way, I'm not getting distracted by potentially unrelated concepts. With that in mind, I think it may be helpful to understand the nature of the Identity Monad, as it's the most trivial implementation of a Monad possible (I think).
What is interesting about the Identity Monad? I think it is that it allows me to express the idea of evaluating expressions in a context defined by other expressions. And to me, that is the essence of every Monad I've encountered (so far).
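For reference, here is the Identity monad written out standalone (GHC ships an equivalent in Data.Functor.Identity):

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity a = Identity (f a)

instance Monad Identity where
  Identity a >>= f = f a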
If you already had a lot of exposure to 'mainstream' programming languages before learning Haskell (like I did), then this doesn't seem very interesting at all. After all, in a mainstream programming language, statements are executed in sequence, one after the other (excepting control-flow constructs, of course). And naturally, we can assume that every statement is evaluated in the context of all previously executed statements and that those previously executed statements may alter the environment and the behavior of the currently executing statement.
All of that is pretty much a foreign concept in a functional, lazy language like Haskell. The order in which computations are evaluated in Haskell is well-defined, but sometimes hard to predict, and even harder to control. And for many kinds of problems, that's just fine. But other sorts of problems (e.g. IO) are hard to solve without some convenient way to establish an implicit order and context between the computations in your program.
As far as side-effects go, specifically, often they can be transformed (via a Monad) into simple state-passing, which is perfectly legal in a pure functional language. Some Monads don't seem to be of that nature, however. Monads such as the IO Monad or the ST monad literally perform side-effecting actions. There are many ways to think about this, but one way that I think about it is that just because my computations must exist in a world without side-effects, the Monad may not. As such, the Monad is free to establish a context for my computation to execute in that is based on side-effects defined by other computations.
Finally, I must disclaim that I am definitely not a Haskell expert. As such, please understand that everything I've said is pretty much my own thoughts on this subject and I may very well disown them later when I understand Monads more fully.
the point is so there can be clean error handling in a chain of functions, containers, and side effects
More or less.
how exactly is the problem of side-effects solved?
A value in the I/O monad, i.e. one of type IO a, should be interpreted as a program. p >> q on IO values can then be interpreted as the operator that combines two programs into one that first executes p, then q. The other monad operators have similar interpretations. By assigning a program to the name main, you declare to the compiler that that is the program that has to be executed by its output object code.
As for the list monad, it's not really related to the I/O monad except in a very abstract mathematical sense. The IO monad gives deterministic computation with side effects, while the list monad gives non-deterministic (but not random!) backtracking search, somewhat similar to Prolog's modus operandi.
With this concept of containers, the language essentially says anything inside the containers is non-deterministic
No. Haskell is deterministic. If you ask for integer addition 2+2 you will always get 4.
"Nondeterministic" is only a metaphor, a way of thinking. Everything is deterministic under the hood. If you have this code:
do x <- [4,5]
   y <- [0,1]
   return (x+y)
it is roughly equivalent to this Python code:
l = []
for x in [4,5]:
    for y in [0,1]:
        l.append(x+y)
You see nondeterminism here? No, it's deterministic construction of a list. Run it twice, you'll get the same numbers in the same order.
You can describe it this way: Choose arbitrary x from [4,5]. Choose arbitrary y from [0,1]. Return x+y. Collect all possible results.
That way seems to involve nondeterminism, but it's only a nested loop (list comprehension). There is no "real" nondeterminism here, it's simulated by checking all possibilities. Nondeterminism is an illusion. The code only appears to be nondeterministic.
This code using State monad:
do put 0
   x <- get
   put (x+2)
   y <- get
   return (y+3)
gives 5 and seems to involve changing state. As with lists, it's an illusion. There are no "variables" that change (as in imperative languages). Everything is immutable under the hood.
You can describe the code this way: Put 0 into a variable. Read the value of the variable into x. Put (x+2) into the variable. Read the variable into y, and return y+3.
That way seems to involve state, but it's only composing functions passing additional parameter. There is no "real" mutability here, it's simulated by composition. Mutability is an illusion. The code only appears to be using it.
Haskell does it this way: you've got functions
a -> s -> (b,s)
This function takes an argument and an old value of the state, and returns its result paired with a new value of the state. It does not involve mutability or changing variables. It's a function in the mathematical sense.
For example, the function put takes the new value of the state, ignores the current state, and returns the new state (paired with the unit result):
put x _ = ((), x)
Just like you can compose two normal functions
a -> b
b -> c
into
a -> c
using (.) operator you can compose "state" transformers
a -> s -> (b,s)
b -> s -> (c,s)
into a single function
a -> s -> (c,s)
Try writing the composition operator yourself. This is what really happens: there are no "side effects", only passing of arguments to functions.
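Here is one possible answer, a minimal sketch (the name composeState is mine, not standard):

-- Compose two state transformers by threading the state through both.
composeState :: (a -> s -> (b, s))
             -> (b -> s -> (c, s))
             -> (a -> s -> (c, s))
composeState f g = \ x s0 ->
  let (y, s1) = f x s0   -- run the first transformer on the initial state
  in  g y s1             -- feed its result and the new state to the second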
From what I understand, Monad is just another typeclass that declares ways to interact with data [...]
...providing an interface common to all those types which have an instance. This can then be used to provide generic definitions which work across all monadic types.
It seems clever and clean to implement these 3 things with one concept [...]
...the only three things that are implemented are the instances for those three types (list, Maybe and IO) - the types themselves are defined independently elsewhere.
[...] but really, the point is so there can be clean error handling in a chain of functions, containers, and side effects.
Not just error handling e.g. consider ST - without the monadic interface, you would have to pass the encapsulated-state directly and correctly...a tiresome task.
How exactly is the problem of side-effects solved?
Short answer: Haskell manages them by using types to indicate their presence.
Can someone explain how, intuitively, Haskell gets away with changing state with inputs and output?
"Intuitively"...like what's available over here? Let's try a simple direct comparison instead:
From How to Declare an Imperative by Philip Wadler:
(* page 26 *)
type 'a io = unit -> 'a
infix >>=
val >>= : 'a io * ('a -> 'b io) -> 'b io
fun m >>= k = fn () => let
                         val x = m ()
                         val y = k x ()
                       in
                         y
                       end
val return : 'a -> 'a io
fun return x = fn () => x
val putc : char -> unit io
fun putc c = fn () => putcML c
val getc : char io
val getc = fn () => getcML ()
fun getcML () =
    valOf(TextIO.input1(TextIO.stdIn))

(* page 25 *)
fun putcML c =
    TextIO.output1(TextIO.stdOut,c)
Based on these two answers of mine, this is my Haskell translation:
type IO a = OI -> a
(>>=) :: IO a -> (a -> IO b) -> IO b
m >>= k = \ u -> let !(u1, u2) = partOI u in
                 let !x = m u1 in
                 let !y = k x u2 in
                 y

return :: a -> IO a
return x = \ u -> let !_ = partOI u in x
putc :: Char -> IO ()
putc c = \ u -> putcOI c u
getc :: IO Char
getc = \ u -> getcOI u
-- primitives
data OI
partOI :: OI -> (OI, OI)
putcOI :: Char -> OI -> ()
getcOI :: OI -> Char
Now remember that short answer about side-effects?
Haskell manages them by using types to indicate their presence.
Data.Char.chr :: Int -> Char -- no side effects
getChar :: IO Char -- side effects at
{- :: OI -> Char -} -- work: beware!
Having briefly looked at Haskell recently, what would be a brief, succinct, practical explanation as to what a monad essentially is?
I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail.
First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder which is a bit more descriptive of what they are actually useful for.
They are a pattern for chaining operations. It looks a bit like method chaining in object-oriented languages, but the mechanism is slightly different.
The pattern is mostly used in functional languages (especially Haskell, which uses monads pervasively) but can be used in any language which supports higher-order functions (that is, functions which can take other functions as arguments).
Arrays in JavaScript support the pattern, so let’s use that as the first example.
The gist of the pattern is we have a type (Array in this case) which has a method which takes a function as argument. The operation supplied must return an instance of the same type (i.e. return an Array).
First an example of method chaining which does not use the monad pattern:
[1,2,3].map(x => x + 1)
The result is [2,3,4]. The code does not conform to the monad pattern, since the function we are supplying as an argument returns a number, not an Array. The same logic in monad form would be:
[1,2,3].flatMap(x => [x + 1])
Here we supply an operation which returns an Array, so now it conforms to the pattern. The flatMap method executes the provided function for every element in the array. It expects an array as result for each invocation (rather than single values), but merges the resulting set of arrays into a single array. So the end result is the same, the array [2,3,4].
(The function argument provided to a method like map or flatMap is often called a "callback" in JavaScript. I will call it the "operation" since it is more general.)
If we chain multiple operations (in the traditional way):
[1,2,3].map(a => a + 1).filter(b => b != 3)
Results in the array [2,4]
The same chaining in monad form:
[1,2,3].flatMap(a => [a + 1]).flatMap(b => b != 3 ? [b] : [])
Yields the same result, the array [2,4].
You will immediately notice that the monad form is quite a bit uglier than the non-monad form! This just goes to show that monads are not necessarily “good”. They are a pattern which is sometimes beneficial and sometimes not.
Do note that the monad pattern can be combined in a different way:
[1,2,3].flatMap(a => [a + 1].flatMap(b => b != 3 ? [b] : []))
Here the binding is nested rather than chained, but the result is the same. This is an important property of monads as we will see later. It means two operations combined can be treated the same as a single operation.
The operation is allowed to return an array with different element types, for example transforming an array of numbers into an array of strings or something else, as long as it is still an Array.
This can be described a bit more formally using Typescript notation. An array has the type Array<T>, where T is the type of the elements in the array. The method flatMap() takes a function argument of the type T => Array<U> and returns an Array<U>.
Generalized, a monad is any type Foo<Bar> which has a "bind" method which takes a function argument of type Bar => Foo<Baz> and returns a Foo<Baz>.
This answers what monads are. The rest of this answer will try to explain through examples why monads can be a useful pattern in a language like Haskell which has good support for them.
Haskell and Do-notation
To translate the map/filter example directly to Haskell, we replace flatMap with the >>= operator:
[1,2,3] >>= \a -> [a+1] >>= \b -> if b == 3 then [] else [b]
The >>= operator is the bind function in Haskell. It does the same as flatMap in JavaScript when the operand is a list, but it is overloaded with different meaning for other types.
But Haskell also has a dedicated syntax for monad expressions, the do-block, which hides the bind operator altogether:
do
  a <- [1,2,3]
  b <- [a+1]
  if b == 3 then [] else [b]
This hides the "plumbing" and lets you focus on the actual operations applied at each step.
In a do-block, each line is an operation. The constraint still holds that all operations in the block must return the same type. Since the first expression is a list, the other operations must also return a list.
The back-arrow <- looks deceptively like an assignment, but note that this is the parameter passed in the bind. So, when the expression on the right side is a List of Integers, the variable on the left side will be a single Integer – and the rest of the block will be executed once for each integer in the list.
Example: Safe navigation (the Maybe type)
Enough about lists, lets see how the monad pattern can be useful for other types.
Some functions may not always return a valid value. In Haskell this is represented by the Maybe-type, which is an option that is either Just value or Nothing.
Chaining operations which always return a valid value is of course straightforward:
streetName = getStreetName (getAddress (getUser 17))
But what if any of the functions could return Nothing? We need to check each result individually and only pass the value to the next function if it is not Nothing:
case getUser 17 of
  Nothing -> Nothing
  Just user ->
    case getAddress user of
      Nothing -> Nothing
      Just address ->
        getStreetName address
Quite a lot of repetitive checks! Imagine if the chain was longer. Haskell solves this with the monad pattern for Maybe:
do
  user <- getUser 17
  addr <- getAddress user
  getStreetName addr
This do-block invokes the bind-function for the Maybe type (since the result of the first expression is a Maybe). The bind-function only executes the following operation if the value is Just value, otherwise it just passes the Nothing along.
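For illustration, here is one hypothetical set of definitions under which the example above compiles; the answer never specifies these types, so this is only a sketch:

data User    = User Address
data Address = Address String

getUser :: Int -> Maybe User
getUser 17 = Just (User (Address "Elm Street"))
getUser _  = Nothing   -- unknown user id

getAddress :: User -> Maybe Address
getAddress (User a) = Just a

getStreetName :: Address -> Maybe String
getStreetName (Address s) = Just s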
Here the monad-pattern is used to avoid repetitive code. This is similar to how some other languages use macros to simplify syntax, although macros achieve the same goal in a very different way.
Note that it is the combination of the monad pattern and the monad-friendly syntax in Haskell which result in the cleaner code. In a language like JavaScript without any special syntax support for monads, I doubt the monad pattern would be able to simplify the code in this case.
Mutable state
Haskell does not support mutable state. All variables are constants and all values immutable. But the State type can be used to emulate programming with mutable state:
add2 :: State Integer Integer
add2 = do
  -- add 1 to state
  x <- get
  put (x + 1)
  -- increment in another way
  modify (+1)
  -- return state
  get
evalState add2 7
=> 9
The add2 function builds a monad chain which is then evaluated with 7 as the initial state.
Obviously this is something which only makes sense in Haskell. Other languages support mutable state out of the box. Haskell is generally "opt-in" on language features - you enable mutable state when you need it, and the type system ensures the effect is explicit. IO is another example of this.
IO
The IO type is used for chaining and executing “impure” functions.
Like any other practical language, Haskell has a bunch of built-in functions which interface with the outside world: putStrLn, getLine and so on. These functions are called “impure” because they either cause side effects or have non-deterministic results. Even something simple like getting the time is considered impure, because the result is non-deterministic – calling it twice with the same arguments may return different values.
A pure function is deterministic – its result depends purely on the arguments passed and it has no side effects on the environment beside returning a value.
Haskell heavily encourages the use of pure functions – this is a major selling point of the language. Unfortunately for purists, you need some impure functions to do anything useful. The Haskell compromise is to cleanly separate pure and impure, and to guarantee that there is no way for pure functions to execute impure functions, directly or indirectly.
This is guaranteed by giving all impure functions the IO type. The entry point of a Haskell program is the main function, which has the IO type, so we can execute impure functions at the top level.
But how does the language prevent pure functions from executing impure functions? This is due to the lazy nature of Haskell. A function is only executed if its output is consumed by some other function. But there is no way to consume an IO value except to assign it to main. So if a function wants to execute an impure function, it has to be connected to main and have the IO type.
Using monad chaining for IO operations also ensures that they are executed in a linear and predictable order, just like statements in an imperative language.
This brings us to the first program most people will write in Haskell:
main :: IO ()
main = do
  putStrLn "Hello World"
The do keyword is superfluous when there is only a single operation and therefore nothing to bind, but I keep it anyway for consistency.
The () type means “void”. This special return type is only useful for IO functions called for their side effect.
A longer example:
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("hello " ++ name)
This builds a chain of IO operations, and since they are assigned to the main function, they get executed.
Comparing IO with Maybe shows the versatility of the monad pattern. For Maybe, the pattern is used to avoid repetitive code by moving conditional logic to the binding function. For IO, the pattern is used to ensure that all operations of the IO type are sequenced and that IO operations cannot "leak" to pure functions.
Summing up
In my subjective opinion, the monad pattern is only really worthwhile in a language which has some built-in support for the pattern. Otherwise it just leads to overly convoluted code. But Haskell (and some other languages) have some built-in support which hides the tedious parts, and then the pattern can be used for a variety of useful things. Like:
Avoiding repetitive code (Maybe)
Adding language features like mutable state or exceptions for delimited areas of the program.
Isolating icky stuff from nice stuff (IO)
Embedded domain-specific languages (Parser)
Adding GOTO to the language.
Explaining "what is a monad" is a bit like saying "what is a number?" We use numbers all the time. But imagine you met someone who didn't know anything about numbers. How the heck would you explain what numbers are? And how would you even begin to describe why that might be useful?
What is a monad? The short answer: It's a specific way of chaining operations together.
In essence, you're writing execution steps and linking them together with the "bind function". (In Haskell, it's named >>=.) You can write the calls to the bind operator yourself, or you can use syntax sugar which makes the compiler insert those function calls for you. But either way, each step is separated by a call to this bind function.
So the bind function is like a semicolon; it separates the steps in a process. The bind function's job is to take the output from the previous step, and feed it into the next step.
That doesn't sound too hard, right? But there is more than one kind of monad. Why? How?
Well, the bind function can just take the result from one step, and feed it to the next step. But if that's "all" the monad does... that actually isn't very useful. And that's important to understand: Every useful monad does something else in addition to just being a monad. Every useful monad has a "special power", which makes it unique.
(A monad that does nothing special is called the "identity monad". Rather like the identity function, this sounds like an utterly pointless thing, yet turns out not to be... But that's another story™.)
Basically, each monad has its own implementation of the bind function. And you can write a bind function such that it does hoopy things between execution steps. For example:
If each step returns a success/failure indicator, you can have bind execute the next step only if the previous one succeeded. In this way, a failing step aborts the whole sequence "automatically", without any conditional testing from you. (The Failure Monad; a sketch of this bind follows this list.)
Extending this idea, you can implement "exceptions". (The Error Monad or Exception Monad.) Because you're defining them yourself rather than it being a language feature, you can define how they work. (E.g., maybe you want to ignore the first two exceptions and only abort when a third exception is thrown.)
You can make each step return multiple results, and have the bind function loop over them, feeding each one into the next step for you. In this way, you don't have to keep writing loops all over the place when dealing with multiple results. The bind function "automatically" does all that for you. (The List Monad.)
As well as passing a "result" from one step to another, you can have the bind function pass extra data around as well. This data now doesn't show up in your source code, but you can still access it from anywhere, without having to manually pass it to every function. (The Reader Monad.)
You can make it so that the "extra data" can be replaced. This allows you to simulate destructive updates, without actually doing destructive updates. (The State Monad and its cousin the Writer Monad.)
Because you're only simulating destructive updates, you can trivially do things that would be impossible with real destructive updates. For example, you can undo the last update, or revert to an older version.
You can make a monad where calculations can be paused, so you can pause your program, go in and tinker with internal state data, and then resume it.
You can implement "continuations" as a monad. This allows you to break people's minds!
All of this and more is possible with monads. Of course, all of this is also perfectly possible without monads too. It's just drastically easier using monads.
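Here is the promised sketch of the success/failure bind from the first bullet above, written in Haskell with a hypothetical Result type (essentially Maybe with a failure reason attached):

data Result a = Success a | Failure String

bindResult :: Result a -> (a -> Result b) -> Result b
bindResult (Success x) f = f x          -- previous step succeeded: run the next one
bindResult (Failure e) _ = Failure e    -- previous step failed: abort the whole chain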
Actually, contrary to common understanding of Monads, they have nothing to do with state. Monads are simply a way of wrapping things and providing methods to do operations on the wrapped stuff without unwrapping it.
For example, you can create a type to wrap another one, in Haskell:
data Wrapped a = Wrap a
To wrap stuff we define
return :: a -> Wrapped a
return x = Wrap x
To perform operations without unwrapping, say you have a function f :: a -> b, then you can do this to lift that function to act on wrapped values:
fmap :: (a -> b) -> (Wrapped a -> Wrapped b)
fmap f (Wrap x) = Wrap (f x)
That's about all there is to understand. However, it turns out that there is a more general function to do this lifting, which is bind:
bind :: (a -> Wrapped b) -> (Wrapped a -> Wrapped b)
bind f (Wrap x) = f x
bind can do a bit more than fmap, but not vice versa. Actually, fmap can be defined in terms of just bind and return. So, when defining a monad, you give its type (here it was Wrapped a) and then say how its return and bind operations work.
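To make that claim concrete, here is a sketch of fmap written using nothing but the bind and return defined above (named fmap' to avoid clashing with the Prelude):

fmap' :: (a -> b) -> Wrapped a -> Wrapped b
fmap' f = bind (return . f)   -- wrap f's result, then apply it via bind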
The cool thing is that this turns out to be such a general pattern that it pops up all over the place, encapsulating state in a pure way is only one of them.
For a good article on how monads can be used to introduce functional dependencies and thus control order of evaluation, like it is used in Haskell's IO monad, check out IO Inside.
As for understanding monads, don't worry too much about it. Read about them what you find interesting and don't worry if you don't understand right away. Then just diving in a language like Haskell is the way to go. Monads are one of these things where understanding trickles into your brain by practice, one day you just suddenly realize you understand them.
But, You could have invented Monads!
sigfpe says:
But all of these introduce monads as something esoteric in need of explanation. But what I want to argue is that they aren't esoteric at all. In fact, faced with various problems in functional programming you would have been led, inexorably, to certain solutions, all of which are examples of monads. In fact, I hope to get you to invent them now if you haven't already. It's then a small step to notice that all of these solutions are in fact the same solution in disguise. And after reading this, you might be in a better position to understand other documents on monads because you'll recognise everything you see as something you've already invented.
Many of the problems that monads try to solve are related to the issue of side effects. So we'll start with them. (Note that monads let you do more than handle side-effects, in particular many types of container object can be viewed as monads. Some of the introductions to monads find it hard to reconcile these two different uses of monads and concentrate on just one or the other.)
In an imperative programming language such as C++, functions behave nothing like the functions of mathematics. For example, suppose we have a C++ function that takes a single floating point argument and returns a floating point result. Superficially it might seem a little like a mathematical function mapping reals to reals, but a C++ function can do more than just return a number that depends on its arguments. It can read and write the values of global variables as well as writing output to the screen and receiving input from the user. In a pure functional language, however, a function can only read what is supplied to it in its arguments and the only way it can have an effect on the world is through the values it returns.
A monad is a datatype that has two operations: >>= (aka bind) and return (aka unit). return takes an arbitrary value and creates an instance of the monad with it. >>= takes an instance of the monad and maps a function over it. (You can see already that a monad is a strange kind of datatype, since in most programming languages you couldn't write a function that takes an arbitrary value and creates a type from it. Monads use a kind of parametric polymorphism.)
In Haskell notation, the monad interface is written
class Monad m where
  return :: a -> m a
  (>>=)  :: forall a b . m a -> (a -> m b) -> m b
These operations are supposed to obey certain "laws", but that's not terrifically important: the "laws" just codify the way sensible implementations of the operations ought to behave (basically, that >>= and return ought to agree about how values get transformed into monad instances and that >>= is associative).
Monads are not just about state and I/O: they abstract a common pattern of computation that includes working with state, I/O, exceptions, and non-determinism. Probably the simplest monads to understand are lists and option types:
instance Monad [] where
  []     >>= k = []
  (x:xs) >>= k = k x ++ (xs >>= k)
  return x     = [x]

instance Monad Maybe where
  Just x  >>= k = k x
  Nothing >>= k = Nothing
  return x      = Just x
where [] and : are the list constructors, ++ is the concatenation operator, and Just and Nothing are the Maybe constructors. Both of these monads encapsulate common and useful patterns of computation on their respective data types (note that neither has anything to do with side effects or I/O).
You really have to play around writing some non-trivial Haskell code to appreciate what monads are about and why they are useful.
You should first understand what a functor is. Before that, understand higher-order functions.
A higher-order function is simply a function that takes a function as an argument.
A functor is any type construction T for which there exists a higher-order function, call it map, that transforms a function of type a -> b (given any two types a and b) into a function T a -> T b. This map function must also obey the laws of identity and composition such that the following expressions return true for all p and q (Haskell notation):
map id = id
map (p . q) = map p . map q
For example, a type constructor called List is a functor if it comes equipped with a function of type (a -> b) -> List a -> List b which obeys the laws above. The only practical implementation is obvious. The resulting List a -> List b function iterates over the given list, calling the (a -> b) function for each element, and returns the list of the results.
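For reference, that implementation looks like this when written against Haskell's built-in list type (a sketch; the name mapList avoids clashing with the Prelude):

mapList :: (a -> b) -> [a] -> [b]
mapList _ []       = []
mapList f (x : xs) = f x : mapList f xs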
A monad is essentially just a functor T with two extra methods, join, of type T (T a) -> T a, and unit (sometimes called return, fork, or pure) of type a -> T a. For lists in Haskell:
join :: [[a]] -> [a]
pure :: a -> [a]
Why is that useful? Because you could, for example, map over a list with a function that returns a list. Join takes the resulting list of lists and concatenates them. List is a monad because this is possible.
You can write a function that does map, then join. This function is called bind, or flatMap, or (>>=), or (=<<). This is normally how a monad instance is given in Haskell.
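As a sketch, here is that map-then-join function for lists, where join is simply concat:

bindList :: (a -> [b]) -> [a] -> [b]
bindList f = concat . map f   -- map, then flatten: exactly (=<<) at the list type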
A monad has to satisfy certain laws, namely that join must be associative. This means that if you have a value x of type [[[a]]] then join (join x) should equal join (map join x). And pure must be an identity for join such that join (pure x) == x.
[Disclaimer: I am still trying to fully grok monads. The following is just what I have understood so far. If it’s wrong, hopefully someone knowledgeable will call me on the carpet.]
Arnar wrote:
Monads are simply a way of wrapping things and providing methods to do operations on the wrapped stuff without unwrapping it.
That’s precisely it. The idea goes like this:
You take some kind of value and wrap it with some additional information. Just like the value is of a certain kind (eg. an integer or a string), so the additional information is of a certain kind.
E.g., that extra information might be a Maybe or an IO.
Then you have some operators that allow you to operate on the wrapped data while carrying along that additional information. These operators use the additional information to decide how to change the behaviour of the operation on the wrapped value.
E.g., a Maybe Int can be a Just Int or Nothing. Now, if you add a Maybe Int to a Maybe Int, the operator will check to see if they are both Just Ints inside, and if so, will unwrap the Ints, pass them to the addition operator, re-wrap the resulting Int into a new Just Int (which is a valid Maybe Int), and thus return a Maybe Int. But if one of them was a Nothing inside, this operator will just immediately return Nothing, which again is a valid Maybe Int. That way, you can pretend that your Maybe Ints are just normal numbers and perform regular math on them. If you were to get a Nothing, your equations will still produce the right result – without you having to litter checks for Nothing everywhere.
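A sketch of the addition operator just described (the name addMaybe is mine; standard Haskell would spell this liftA2 (+)):

addMaybe :: Maybe Int -> Maybe Int -> Maybe Int
addMaybe (Just x) (Just y) = Just (x + y)   -- both Just: unwrap, add, re-wrap
addMaybe _        _        = Nothing        -- any Nothing: propagate it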
But the example is just what happens for Maybe. If the extra information was an IO, then that special operator defined for IOs would be called instead, and it could do something totally different before performing the addition. (OK, adding two IO Ints together is probably nonsensical – I’m not sure yet.) (Also, if you paid attention to the Maybe example, you have noticed that “wrapping a value with extra stuff” is not always correct. But it’s hard to be exact, correct and precise without being inscrutable.)
Basically, “monad” roughly means “pattern”. But instead of a book full of informally explained and specifically named Patterns, you now have a language construct – syntax and all – that allows you to declare new patterns as things in your program. (The imprecision here is that all the patterns have to follow a particular form, so a monad is not quite as generic as a pattern. But I think that’s the closest term that most people know and understand.)
And that is why people find monads so confusing: because they are such a generic concept. To ask what makes something a monad is similarly vague as to ask what makes something a pattern.
But think of the implications of having syntactic support in the language for the idea of a pattern: instead of having to read the Gang of Four book and memorise the construction of a particular pattern, you just write code that implements this pattern in an agnostic, generic way once and then you are done! You can then reuse this pattern, like Visitor or Strategy or Façade or whatever, just by decorating the operations in your code with it, without having to re-implement it over and over!
So that is why people who understand monads find them so useful: it’s not some ivory tower concept that intellectual snobs pride themselves on understanding (OK, that too of course, teehee), but actually makes code simpler.
After much striving, I think I finally understand the monad. After rereading my own lengthy critique of the overwhelmingly top voted answer, I will offer this explanation.
There are three questions that need to be answered to understand monads:
Why do you need a monad?
What is a monad?
How is a monad implemented?
As I noted in my original comments, too many monad explanations get caught up in question number 3 before really adequately covering question 1 or question 2.
Why do you need a monad?
Pure functional languages like Haskell are different from imperative languages like C or Java in that a pure functional program is not necessarily executed in a specific order, one step at a time. A Haskell program is more akin to a mathematical function, in which you may solve the "equation" in any number of potential orders. This confers a number of benefits, among which is that it eliminates the possibility of certain kinds of bugs, particularly those relating to things like "state".
However, there are certain problems that are not so straightforward to solve with this style of programming. Some things, like console programming, and file i/o, need things to happen in a particular order, or need to maintain state. One way to deal with this problem is to create a kind of object that represents the state of a computation, and a series of functions that take a state object as input, and return a new modified state object.
So let's create a hypothetical "state" value that represents the state of a console screen. Exactly how this value is constructed is not important, but let's say, in pseudocode, it's an array of ASCII characters representing what is currently visible on the screen, plus an array representing the last line of input entered by the user. We've defined some functions that take a console state, modify it, and return a new console state.
consolestate MyConsole = new consolestate;
So to do console programming, but in a pure functional manner, you would need to nest a lot of function calls inside eachother.
consolestate FinalConsole = print(input(print(myconsole, "Hello, what's your name?")),"hello, %inputbuffer%!");
Programming in this way keeps the "pure" functional style, while forcing changes to the console to happen in a particular order. But, we'll probably want to do more than just a few operations at a time like in the above example. Nesting functions in that way will start to become ungainly. What we want, is code that does essentially the same thing as above, but is written a bit more like this:
consolestate FinalConsole = myconsole:
                            print("Hello, what's your name?"):
                            input():
                            print("hello, %inputbuffer%!");
This would indeed be a more convenient way to write it. How do we do that though?
What is a monad?
Once you have a type (such as consolestate) that you define along with a bunch of functions designed specifically to operate on that type, you can turn the whole package into a "monad" by defining an operator like : (bind) that automatically feeds return values on its left into function parameters on its right, together with a lift operator that turns normal functions into functions that work with that specific kind of bind operator. A sketch follows.
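Here is that sketch, rendered in Haskell; ConsoleState, print' and input' are hypothetical stand-ins for the pseudocode above, and |> plays the role of the : operator:

data ConsoleState = ConsoleState
  { screen      :: String   -- what is currently visible on the screen
  , inputBuffer :: String   -- the last line of input entered by the user
  }

-- Feed the state on the left into the operation on the right.
(|>) :: ConsoleState -> (ConsoleState -> ConsoleState) -> ConsoleState
st |> op = op st

print' :: String -> ConsoleState -> ConsoleState
print' s st = st { screen = screen st ++ s }

input' :: ConsoleState -> ConsoleState   -- a stub; a real version would read input
input' st = st

finalConsole :: ConsoleState
finalConsole = ConsoleState "" ""
            |> print' "Hello, what's your name?"
            |> input'
            |> print' "hello, %inputbuffer%!"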
How is a monad implemented?
See other answers, that seem quite free to jump into the details of that.
After giving an answer to this question a few years ago, I believe I can improve and simplify that response with...
A monad is a function composition technique that externalizes treatment for some input scenarios using a composing function, bind, to pre-process input during composition.
In normal composition, the function compose (>>) is used to apply the composed function to the result of its predecessor in sequence. Importantly, the function being composed is required to handle all scenarios of its input.
(x -> y) >> (y -> z)
This design can be improved by restructuring the input so that relevant states are more easily interrogated. So, instead of simply y, the value can become Mb: for instance, (is_OK, b) if y included a notion of validity.
For example, when the input is only possibly a number, instead of returning a string which can dutifully contain a number or not, you could restructure the type into a bool indicating the presence of a valid number and a number in tuple such as, bool * float. The composed functions would now no longer need to parse an input string to determine whether a number exists but could merely inspect the bool portion of a tuple.
(Ma -> Mb) >> (Mb -> Mc)
Here, again, composition occurs naturally with compose and so each function must handle all scenarios of its input individually, though doing so is now much easier.
However, what if we could externalize the effort of interrogation for those times where handling a scenario is routine? For example, what if our program does nothing when the input is not OK, as when is_OK is false? If that were done, composed functions would not need to handle that scenario themselves, dramatically simplifying their code and effecting another level of reuse.
To achieve this externalization we could use a function, bind (>>=), to perform the composition instead of compose. As such, instead of simply transferring values from the output of one function to the input of another, bind would inspect the M portion of Ma and decide whether and how to apply the composed function to the a. Of course, the function bind would be defined specifically for our particular M so as to be able to inspect its structure and perform whatever type of application we want. Nonetheless, the a can be anything, since bind merely passes the a uninspected to the composed function when it determines application necessary. Additionally, the composed functions themselves no longer need to deal with the M portion of the input structure either, simplifying them. Hence...
(a -> Mb) >>= (b -> Mc) or more succinctly Mb >>= (b -> Mc)
In short, a monad externalizes, and thereby provides standard behaviour around, the treatment of certain input scenarios, once the input is designed to sufficiently expose them. This design is a shell-and-content model, where the shell contains data relevant to the application of the composed function and is interrogated by, and remains available only to, the bind function.
Therefore, a monad is three things:
an M shell for holding monad relevant information,
a bind function implemented to make use of this shell information in its application of the composed functions to the content value(s) it finds within the shell, and
composable functions of the form, a -> Mb, producing results that include monadic management data.
Generally speaking, the input to a function is far more restrictive than its output, which may include such things as error conditions; hence, the Mb result structure is generally very useful. For instance, the division operator does not return a number when the divisor is 0.
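For instance, division written as a composable function of the form a -> Mb, in Haskell with Maybe playing the role of M:

safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing        -- no number to return when the divisor is 0
safeDiv n d = Just (n / d)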
Additionally, monads may include wrap functions that wrap values, a, into the monadic type, Ma, and general functions, a -> b, into monadic functions, a -> Mb, by wrapping their results after application. Of course, like bind, such wrap functions are specific to M. An example:
let return a = [a]
let lift f a = return (f a)
The design of the bind function presumes immutable data structures and pure functions; otherwise things get complex and guarantees cannot be made. As such, there are monadic laws:
Given...
M_
return = (a -> Ma)
f = (a -> Mb)
g = (b -> Mc)
Then...
Left Identity : (return a) >>= f === f a
Right Identity : Ma >>= return === Ma
Associativity : (Ma >>= f) >>= g === Ma >>= (fun x -> f x >>= g)
Associativity means that bind preserves the order of evaluation regardless of when bind is applied. That is, in the definition of Associativity above, forcing early evaluation of the parenthesized binding of f and g only results in a function that expects Ma in order to complete the bind. Hence the evaluation of Ma must be determined before its value can be applied to f, and that result in turn applied to g.
A monad is, effectively, a form of "type operator". It will do three things. First, it will "wrap" (or otherwise convert) a value of one type into another type (typically called a "monadic type"). Secondly, it will make all the operations (or functions) available on the underlying type available on the monadic type. Finally, it will provide support for combining itself with another monad to produce a composite monad.
The "maybe monad" is essentially the equivalent of "nullable types" in Visual Basic / C#. It takes a non nullable type "T" and converts it into a "Nullable<T>", and then defines what all the binary operators mean on a Nullable<T>.
Side effects are represented simillarly. A structure is created that holds descriptions of side effects alongside a function's return value. The "lifted" operations then copy around side effects as values are passed between functions.
They are called "monads" rather than the easier-to-grasp name of "type operators" for several reasons:
Monads have restrictions on what they can do (see the definition for details).
Those restrictions, along with the fact that there are three operations involved, conform to the structure of something called a monad in Category Theory, which is an obscure branch of mathematics.
They were designed by proponents of "pure" functional languages
Proponents of pure functional languages like obscure branches of mathematics
Because the math is obscure, and monads are associated with particular styles of programming, people tend to use the word monad as a sort of secret handshake. Because of this no one has bothered to invest in a better name.
(See also the answers at What is a monad?)
A good motivation to Monads is sigfpe (Dan Piponi)'s You Could Have Invented Monads! (And Maybe You Already Have). There are a LOT of other monad tutorials, many of which misguidedly try to explain monads in "simple terms" using various analogies: this is the monad tutorial fallacy; avoid them.
As DR MacIver says in Tell us why your language sucks:
So, things I hate about Haskell:
Let’s start with the obvious. Monad tutorials. No, not monads. Specifically the tutorials. They’re endless, overblown and dear god are they tedious. Further, I’ve never seen any convincing evidence that they actually help. Read the class definition, write some code, get over the scary name.
You say you understand the Maybe monad? Good, you're on your way. Just start using other monads and sooner or later you'll understand what monads are in general.
[If you are mathematically oriented, you might want to ignore the dozens of tutorials and learn the definition, or follow lectures in category theory :)
The main part of the definition is that a Monad M involves a "type constructor" that defines for each existing type "T" a new type "M T", and some ways for going back and forth between "regular" types and "M" types.]
Also, surprisingly enough, one of the best introductions to monads is actually one of the early academic papers introducing monads, Philip Wadler's Monads for functional programming. It actually has practical, non-trivial motivating examples, unlike many of the artificial tutorials out there.
Monads are to control flow what abstract data types are to data.
In other words, many developers are comfortable with the idea of Sets, Lists, Dictionaries (or Hashes, or Maps), and Trees. Within those data types there are many special cases (for instance InsertionOrderPreservingIdentityHashMap).
However, when confronted with program "flow" many developers haven't been exposed to many more constructs than if, switch/case, do, while, goto (grr), and (maybe) closures.
So, a monad is simply a control flow construct. A better phrase to replace monad would be 'control type'.
As such, a monad has slots for control logic, or statements, or functions - the equivalent in data structures would be to say that some data structures allow you to add data, and remove it.
For example, the "if" monad:
if( clause ) then block
at its simplest has two slots - a clause, and a block. The if monad is usually built to evaluate the result of the clause, and if not false, evaluate the block. Many developers are not introduced to monads when they learn 'if', and it just isn't necessary to understand monads to write effective logic.
Monads can become more complicated, in the same way that data structures can become more complicated, but there are many broad categories of monad that may have similar semantics, but differing implementations and syntax.
Of course, in the same way that data structures may be iterated over, or traversed, monads may be evaluated.
Compilers may or may not have support for user-defined monads. Haskell certainly does. Ioke has some similar capabilities, although the term monad is not used in the language.
My favorite Monad tutorial:
http://www.haskell.org/haskellwiki/All_About_Monads
(out of 170,000 hits on a Google search for "monad tutorial"!)
#Stu: The point of monads is to allow you to add (usually) sequential semantics to otherwise pure code; you can even compose monads (using Monad Transformers) and get more interesting and complicated combined semantics, like parsing with error handling, shared state, and logging, for example. All of this is possible in pure code, monads just allow you to abstract it away and reuse it in modular libraries (always good in programming), as well as providing convenient syntax to make it look imperative.
Haskell already has operator overloading[1]: it uses type classes much the way one might use interfaces in Java or C#, but Haskell just happens to also allow non-alphanumeric tokens like +, &&, and > as infix identifiers. It's only operator overloading in your way of looking at it if you mean "overloading the semicolon" [2]. It sounds like black magic and asking for trouble to "overload the semicolon" (picture enterprising Perl hackers getting wind of this idea) but the point is that without monads there is no semicolon, since purely functional code does not require or allow explicit sequencing.
This all sounds much more complicated than it needs to. sigfpe's article is pretty cool but uses Haskell to explain it, which sort of fails to break the chicken and egg problem of understanding Haskell to grok Monads and understanding Monads to grok Haskell.
[1] This is a separate issue from monads but monads use Haskell's operator overloading feature.
[2] This is also an oversimplification since the operator for chaining monadic actions is >>= (pronounced "bind") but there is syntactic sugar ("do") that lets you use braces and semicolons and/or indentation and newlines.
I am still new to monads, but I thought I would share a link I found that felt really good to read (WITH PICTURES!!):
http://www.matusiak.eu/numerodix/blog/2012/3/11/monads-for-the-layman/
(no affiliation)
Basically, the warm and fuzzy concept that I got from the article was the concept that monads are basically adapters that allow disparate functions to work in a composable fashion, i.e. be able to string up multiple functions and mix and match them without worrying about inconsistent return types and such. So the BIND function is in charge of keeping apples with apples and oranges with oranges when we're trying to make these adapters. And the LIFT function is in charge of taking "lower level" functions and "upgrading" them to work with BIND functions and be composable as well.
I hope I got it right, and more importantly, hope that the article has a valid view on monads. If nothing else, this article helped whet my appetite for learning more about monads.
I've been thinking of Monads in a different way, lately. I've been thinking of them as abstracting out execution order in a mathematical way, which makes new kinds of polymorphism possible.
If you're using an imperative language, and you write some expressions in order, the code ALWAYS runs exactly in that order.
And in the simple case, when you use a monad, it feels the same -- you define a list of expressions that happen in order. Except that, depending on which monad you use, your code might run in order (like in IO monad), in parallel over several items at once (like in the List monad), it might halt partway through (like in the Maybe monad), it might pause partway through to be resumed later (like in a Resumption monad), it might rewind and start from the beginning (like in a Transaction monad), or it might rewind partway to try other options (like in a Logic monad).
And because monads are polymorphic, it's possible to run the same code in different monads, depending on your needs.
Plus, in some cases, it's possible to combine monads together (with monad transformers) to get multiple features at the same time.
tl;dr
{-# LANGUAGE InstanceSigs #-}

newtype Id t = Id t

instance Monad Id where
   return :: t -> Id t
   return = Id

   (=<<) :: (a -> Id b) -> Id a -> Id b
   f =<< (Id x) = f x
Prologue
The application operator $ of functions
forall a b. a -> b
is canonically defined
($) :: (a -> b) -> a -> b
f $ x = f x
infixr 0 $
in terms of Haskell-primitive function application f x (infixl 10).
Composition . is defined in terms of $ as
(.) :: (b -> c) -> (a -> b) -> (a -> c)
f . g = \ x -> f $ g x
infixr 9 .
and satisfies the equivalences forall f g h.
f . id = f :: c -> d Right identity
id . g = g :: b -> c Left identity
(f . g) . h = f . (g . h) :: a -> d Associativity
. is associative, and id is its right and left identity.
The Kleisli triple
In programming, a monad is a functor type constructor with an instance of the monad type class. There are several equivalent variants of definition and implementation, each carrying slightly different intuitions about the monad abstraction.
A functor is a type constructor f of kind * -> * with an instance of the functor type class.
{-# LANGUAGE KindSignatures #-}
class Functor (f :: * -> *) where
   map :: (a -> b) -> (f a -> f b)
In addition to following statically enforced type protocol, instances of the functor type class must obey the algebraic functor laws forall f g.
map id = id :: f t -> f t Identity
map f . map g = map (f . g) :: f a -> f c Composition / short cut fusion
Functor computations have the type
forall f t. Functor f => f t
A computation c r consists of results r within a context c.
Unary monadic functions or Kleisli arrows have the type
forall m a b. Functor m => a -> m b
Kleisli arrows are functions that take one argument a and return a monadic computation m b.
Monads are canonically defined in terms of the Kleisli triple forall m. Functor m =>
(m, return, (=<<))
implemented as the type class
class Functor m => Monad m where
   return :: t -> m t
   (=<<)  :: (a -> m b) -> m a -> m b

infixr 1 =<<
The Kleisli identity return is a Kleisli arrow that promotes a value t into monadic context m. Extension or Kleisli application =<< applies a Kleisli arrow a -> m b to results of a computation m a.
Kleisli composition <=< is defined in terms of extension as
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
f <=< g = \ x -> f =<< g x
infixr 1 <=<
<=< composes two Kleisli arrows, applying the left arrow to results of the right arrow’s application.
Instances of the monad type class must obey the monad laws, most elegantly stated in terms of Kleisli composition: forall f g h.
f <=< return = f :: c -> m d Right identity
return <=< g = g :: b -> m c Left identity
(f <=< g) <=< h = f <=< (g <=< h) :: a -> m d Associativity
<=< is associative, and return is its right and left identity.
Identity
The identity type
type Id t = t
is the identity function on types
Id :: * -> *
Interpreted as a functor,
return :: t -> Id t
= id :: t -> t
(=<<) :: (a -> Id b) -> Id a -> Id b
= ($) :: (a -> b) -> a -> b
(<=<) :: (b -> Id c) -> (a -> Id b) -> (a -> Id c)
= (.) :: (b -> c) -> (a -> b) -> (a -> c)
In canonical Haskell, the identity monad is defined
newtype Id t = Id t
instance Functor Id where
map :: (a -> b) -> Id a -> Id b
map f (Id x) = Id (f x)
instance Monad Id where
return :: t -> Id t
return = Id
(=<<) :: (a -> Id b) -> Id a -> Id b
f =<< (Id x) = f x
Option
An option type
data Maybe t = Nothing | Just t
encodes a computation Maybe t that does not necessarily yield a result t: a computation that may “fail”. The option monad is defined
instance Functor Maybe where
   map :: (a -> b) -> (Maybe a -> Maybe b)
   map f (Just x) = Just (f x)
   map _ Nothing  = Nothing

instance Monad Maybe where
   return :: t -> Maybe t
   return = Just

   (=<<) :: (a -> Maybe b) -> Maybe a -> Maybe b
   f =<< (Just x) = f x
   _ =<< Nothing  = Nothing
a -> Maybe b is applied to a result only if Maybe a yields a result.
newtype Nat = Nat Int
The natural numbers can be encoded as those integers greater than or equal to zero.
toNat :: Int -> Maybe Nat
toNat i | i >= 0    = Just (Nat i)
        | otherwise = Nothing
The natural numbers are not closed under subtraction.
(-?) :: Nat -> Nat -> Maybe Nat
(Nat n) -? (Nat m) = toNat (n - m)
infixl 6 -?
The option monad covers a basic form of exception handling.
(-? 20) <=< toNat :: Int -> Maybe Nat
List
The list monad, over the list type
data [] t = [] | t : [t]
infixr 5 :
and its additive monoid operation “append”
(++) :: [t] -> [t] -> [t]
(x : xs) ++ ys = x : xs ++ ys
[] ++ ys = ys
infixr 5 ++
encodes nonlinear computation [t] yielding a natural amount 0, 1, ... of results t.
instance Functor [] where
   map :: (a -> b) -> ([a] -> [b])
   map f (x : xs) = f x : map f xs
   map _ []       = []

instance Monad [] where
   return :: t -> [t]
   return = (: [])

   (=<<) :: (a -> [b]) -> [a] -> [b]
   f =<< (x : xs) = f x ++ (f =<< xs)
   _ =<< []       = []
Extension =<< concatenates ++ all lists [b] resulting from applications f x of a Kleisli arrow a -> [b] to elements of [a] into a single result list [b].
Let the proper divisors of a positive integer n be
divisors :: Integral t => t -> [t]
divisors n = filter (`divides` n) [2 .. n - 1]
divides :: Integral t => t -> t -> Bool
(`divides` n) = (== 0) . (n `rem`)
then
forall n. let { f = f <=< divisors } in f n = []
In defining the monad type class, instead of extension =<<, the Haskell standard uses its flip, the bind operator >>=.
class Applicative m => Monad m where
   (>>=) :: forall a b. m a -> (a -> m b) -> m b

   (>>) :: forall a b. m a -> m b -> m b
   m >> k = m >>= \ _ -> k
   {-# INLINE (>>) #-}

   return :: a -> m a
   return = pure
For simplicity's sake, this explanation uses the type class hierarchy
class Functor f
class Functor m => Monad m
In Haskell, the current standard hierarchy is
class Functor f
class Functor p => Applicative p
class Applicative m => Monad m
because not only is every monad a functor, but every applicative is a functor and every monad is an applicative, too.
Using the list monad, the imperative pseudocode
for a in (1, ..., 10)
   for b in (1, ..., 10)
      p <- a * b
      if even(p)
         yield p
roughly translates to the do block,
do a <- [1 .. 10]
   b <- [1 .. 10]
   let p = a * b
   guard (even p)
   return p
the equivalent monad comprehension,
[ p | a <- [1 .. 10], b <- [1 .. 10], let p = a * b, even p ]
and the expression
[1 .. 10] >>= (\ a ->
   [1 .. 10] >>= (\ b ->
      let p = a * b in
      guard (even p) >>        -- [ () | even p ] >>
      return p
   )
)
Do notation and monad comprehensions are syntactic sugar for nested bind expressions. The bind operator is used for local name binding of monadic results.
let x = v in e = (\ x -> e) $ v = v & (\ x -> e)
do { r <- m; c } = (\ r -> c) =<< m = m >>= (\ r -> c)
where
(&) :: a -> (a -> b) -> b
(&) = flip ($)
infixl 0 &
The guard function is defined
guard :: Additive m => Bool -> m ()
guard True = return ()
guard False = fail
where the unit type or “empty tuple”
data () = ()
Additive monads that support choice and failure can be abstracted over using a type class
class Monad m => Additive m where
   fail  :: m t
   (<|>) :: m t -> m t -> m t

infixl 3 <|>

instance Additive Maybe where
   fail = Nothing

   Nothing <|> m = m
   m       <|> _ = m

instance Additive [] where
   fail  = []
   (<|>) = (++)
where fail and <|> form a monoid forall k l m.
k <|> fail = k
fail <|> l = l
(k <|> l) <|> m = k <|> (l <|> m)
and fail is the absorbing/annihilating zero element of additive monads
_ =<< fail = fail
If in
guard (even p) >> return p
even p is true, then the guard produces [()], and, by the definition of >>, the local constant function
\ _ -> return p
is applied to the result (). If false, then the guard produces the list monad’s fail ( [] ), which yields no result for a Kleisli arrow to be applied >> to, so this p is skipped over.
State
Infamously, monads are used to encode stateful computation.
A state processor is a function
forall st t. st -> (t, st)
that transitions a state st and yields a result t. The state st can be anything. Nothing, flag, count, array, handle, machine, world.
The type of state processors is usually called
type State st t = st -> (t, st)
The state processor monad is the kinded * -> * functor State st. Kleisli arrows of the state processor monad are functions
forall st a b. a -> (State st) b
In canonical Haskell, the lazy version of the state processor monad is defined
newtype State st t = State { stateProc :: st -> (t, st) }
instance Functor (State st) where
   map :: (a -> b) -> ((State st) a -> (State st) b)
   map f (State p) = State $ \ s0 -> let (x, s1) = p s0
                                     in  (f x, s1)

instance Monad (State st) where
   return :: t -> (State st) t
   return x = State $ \ s -> (x, s)

   (=<<) :: (a -> (State st) b) -> (State st) a -> (State st) b
   f =<< (State p) = State $ \ s0 -> let (x, s1) = p s0
                                     in  stateProc (f x) s1
A state processor is run by supplying an initial state:
run :: State st t -> st -> (t, st)
run = stateProc
eval :: State st t -> st -> t
eval = fst . run
exec :: State st t -> st -> st
exec = snd . run
State access is provided by primitives get and put, methods of abstraction over stateful monads:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
class Monad m => Stateful m st | m -> st where
   get :: m st
   put :: st -> m ()
m -> st declares a functional dependency of the state type st on the monad m; that a State t, for example, will determine the state type to be t uniquely.
instance Stateful (State st) st where
   get :: State st st
   get = State $ \ s -> (s, s)

   put :: st -> State st ()
   put s = State $ \ _ -> ((), s)
with the unit type used analogously to void in C.
modify :: Stateful m st => (st -> st) -> m ()
modify f = do
   s <- get
   put (f s)

gets :: Stateful m st => (st -> t) -> m t
gets f = do
   s <- get
   return (f s)
gets is often used with record field accessors.
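For example, with a hypothetical record (a sketch only, using the Stateful machinery defined above):

data Config = Config { verbosity :: Int, logFile :: FilePath }

getVerbosity :: State Config Int
getVerbosity = gets verbosity   -- read one field of the threaded state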
The state monad equivalent of the variable threading
let s0 = 34
    s1 = (+ 1) s0
    n  = (* 12) s1
    s2 = (+ 7) s1
in  (show n, s2)
where s0 :: Int, is the equally referentially transparent, but infinitely more elegant and practical
(flip run) 34
   (do
      modify (+ 1)
      n <- gets (* 12)
      modify (+ 7)
      return (show n)
   )
modify (+ 1) is a computation of type State Int (), except for its effect equivalent to return ().
(flip run) 34
   (modify (+ 1) >>
    gets (* 12) >>= (\ n ->
    modify (+ 7) >>
    return (show n)
    )
   )
The monad law of associativity can be written in terms of >>= forall m f g.
(m >>= f) >>= g = m >>= (\ x -> f x >>= g)
or
do {                         do {                         do {
   r1 <- do {                   x <- m;                      r0 <- m;
      r0 <- m;          =       do {                 =       r1 <- f r0;
      f r0                         r1 <- f x;                g r1
   };                              g r1                   }
   g r1                      }
}                            }
Like in expression-oriented programming (e.g. Rust), the last statement of a block represents its yield. The bind operator is sometimes called a “programmable semicolon”.
Iteration control structure primitives from structured imperative programming are emulated monadically
for :: Monad m => (a -> m b) -> [a] -> m ()
for f = foldr ((>>) . f) (return ())

while :: Monad m => m Bool -> m t -> m ()
while c m = do
   b <- c
   if b then m >> while c m
        else return ()

forever :: Monad m => m a -> m b
forever m = m >> forever m
Input/Output
data World
The I/O world state processor monad is a reconciliation of pure Haskell and the real world, of functional denotative and imperative operational semantics. A close analogue of the actual strict implementation:
type IO t = World -> (t, World)
Interaction is facilitated by impure primitives
getChar :: IO Char
putChar :: Char -> IO ()
readFile :: FilePath -> IO String
writeFile :: FilePath -> String -> IO ()
hSetBuffering :: Handle -> BufferMode -> IO ()
hTell :: Handle -> IO Integer
...
The impurity of code that uses IO primitives is permanently protocolized by the type system. Because purity is awesome, what happens in IO, stays in IO.
unsafePerformIO :: IO t -> t
Or, at least, should.
The type signature of a Haskell program
main :: IO ()
main = putStrLn "Hello, World!"
expands to
World -> ((), World)
A function that transforms a world.
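Under this toy model, a two-action program can be pictured as explicit world-threading. This is a sketch only; GHC's real IO is an abstract type, not a plain function:
-- Echo one character: each action consumes a world and produces the next,
-- which is what forces the read and the write into a fixed order.
echo :: World -> ((), World)
echo world0 =
  let (c, world1) = getChar world0
  in putChar c world1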
Epilogue
The category whose objects are Haskell types and whose morphisms are functions between Haskell types is, “fast and loose”, the category Hask.
A functor T is a mapping from a category C to a category D; for each object in C an object in D
Tobj : Obj(C) -> Obj(D)
f :: * -> *
and for each morphism in C a morphism in D
Tmor : HomC(X, Y) -> HomD(Tobj(X), Tobj(Y))
map :: (a -> b) -> (f a -> f b)
where X, Y are objects in C. HomC(X, Y) is the hom-class of all morphisms X -> Y in C. The functor must preserve morphism identity and composition, the “structure” of C, in D.
T(id) = id : T(X) -> T(X) Identity
T(f) . T(g) = T(f . g) : T(X) -> T(Z) Composition
The Kleisli category of a category C is given by a Kleisli triple
<T, eta, _*>
of an endofunctor
T : C -> C
f :: * -> *
an identity morphism eta (return), and an extension operator _* (=<<).
Each Kleisli morphism in Hask
f : X -> T(Y)
f :: a -> m b
by the extension operator
(_)* : Hom(X, T(Y)) -> Hom(T(X), T(Y))
(=<<) :: (a -> m b) -> (m a -> m b)
is given a morphism in Hask’s Kleisli category
f* : T(X) -> T(Y)
(f =<<) :: m a -> m b
Composition in the Kleisli category .T is given in terms of extension
f .T g = f* . g : X -> T(Z)
f <=< g = (f =<<) . g :: a -> m c
and satisfies the category axioms
eta .T g = g : Y -> T(Z) Left identity
return <=< g = g :: b -> m c
f .T eta = f : Z -> T(U) Right identity
f <=< return = f :: c -> m d
(f .T g) .T h = f .T (g .T h) : X -> T(U) Associativity
(f <=< g) <=< h = f <=< (g <=< h) :: a -> m d
which, applying the equivalence transformations
eta .T g = g
eta* . g = g By definition of .T
eta* . g = id . g forall f. id . f = f
eta* = id forall f g h. f . h = g . h ==> f = g
(f .T g) .T h = f .T (g .T h)
(f* . g)* . h = f* . (g* . h) By definition of .T
(f* . g)* . h = f* . g* . h . is associative
(f* . g)* = f* . g* forall f g h. f . h = g . h ==> f = g
in terms of extension are canonically given
eta* = id : T(X) -> T(X) Left identity
(return =<<) = id :: m t -> m t
f* . eta = f : Z -> T(U) Right identity
(f =<<) . return = f :: c -> m d
(f* . g)* = f* . g* : T(X) -> T(Z) Associativity
(((f =<<) . g) =<<) = (f =<<) . (g =<<) :: m a -> m c
Monads can also be defined in terms not of Kleislian extension, but a natural transformation mu, in programming called join. A monad is defined in terms of mu as a triple over a category C, of an endofunctor
T : C -> C
f :: * -> *
and two natural transformations
eta : Id -> T
return :: t -> f t
mu : T . T -> T
join :: f (f t) -> f t
satisfying the equivalences
mu . T(mu) = mu . mu : T . T . T -> T . T Associativity
join . map join = join . join :: f (f (f t)) -> f t
mu . T(eta) = mu . eta = id : T -> T Identity
join . map return = join . return = id :: f t -> f t
The monad type class is then defined
class Functor m => Monad m where
return :: t -> m t
join :: m (m t) -> m t
The canonical mu implementation of the option monad:
instance Monad Maybe where
return = Just
join (Just m) = m
join Nothing = Nothing
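For example, join collapses exactly one layer of Maybe:
join (Just (Just 3)) -- Just 3
join (Just Nothing) -- Nothing
join Nothing -- Nothing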
The concat function
concat :: [[a]] -> [a]
concat (x : xs) = x ++ concat xs
concat [] = []
is the join of the list monad.
instance Monad [] where
return :: t -> [t]
return = (: [])
(=<<) :: (a -> [b]) -> ([a] -> [b])
(f =<<) = concat . map f
Implementations of join can be translated from extension form using the equivalence
mu = id* : T . T -> T
join = (id =<<) :: m (m t) -> m t
The reverse translation from mu to extension form is given by
f* = mu . T(f) : T(X) -> T(Y)
(f =<<) = join . map f :: m a -> m b
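Specializing this translation to the option monad recovers its familiar bind:
-- (f =<<) = join . map f, at the Maybe type:
-- f =<< Just x = join (map f (Just x)) = join (Just (f x)) = f x
-- f =<< Nothing = join (map f Nothing) = join Nothing = Nothing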
Philip Wadler: Monads for functional programming
Simon L Peyton Jones, Philip Wadler: Imperative functional programming
Jonathan M. D. Hill, Keith Clarke: An introduction to category theory, category theory monads, and their relationship to functional programming
Kleisli category
Eugenio Moggi: Notions of computation and monads
What a monad is not
But why should a theory so abstract be of any use for programming?
The answer is simple: as computer scientists, we value abstraction! When we design the interface to a software component, we want it to reveal as little as possible about the implementation. We want to be able to replace the implementation with many alternatives, many other ‘instances’ of the same ‘concept’. When we design a generic interface to many program libraries, it is even more important that the interface we choose have a variety of implementations. It is the generality of the monad concept which we value so highly, it is because category theory is so abstract that its concepts are so useful for programming.
It is hardly surprising, then, that the generalisation of monads that we present below also has a close connection to category theory. But we stress that our purpose is very practical: it is not to ‘implement category theory’, it is to find a more general way to structure combinator libraries. It is simply our good fortune that mathematicians have already done much of the work for us!
from Generalising Monads to Arrows by John Hughes
Monads Are Not Metaphors, but a practically useful abstraction emerging from a common pattern, as Daniel Spiewak explains.
In addition to the excellent answers above, let me offer you a link to the following article (by Patrick Thomson) which explains monads by relating the concept to the JavaScript library jQuery (and its way of using "method chaining" to manipulate the DOM):
jQuery is a Monad
The jQuery documentation itself doesn't refer to the term "monad" but talks about the "builder pattern" which is probably more familiar. This doesn't change the fact that you have a proper monad there maybe without even realizing it.
A monad is a way of combining computations together that share a common context. It is like building a network of pipes. When constructing the network, there is no data flowing through it. But when I have finished piecing all the bits together with 'bind' and 'return' then I invoke something like runMyMonad monad data and the data flows through the pipes.
In practice, a monad is a custom implementation of a function composition operator that takes care of side effects and incompatible input and return values (for chaining).
The two things that helped me best when learning about them were:
Chapter 8, "Functional Parsers," from Graham Hutton's book Programming in Haskell. This doesn't mention monads at all, actually, but if you can work through the chapter and really understand everything in it, particularly how a sequence of bind operations is evaluated, you'll understand the internals of monads. Expect this to take several tries.
The tutorial All About Monads. This gives several good examples of their use, and I have to say that the analogy in Appendix I worked for me.
A Monoid appears to be something that ensures that all operations defined on a Monoid and a supported type will always return a supported type inside the Monoid. E.g., any number + any number = a number, no errors.
Whereas division accepts two fractionals and returns a fractional, and Haskell defines division by zero as Infinity for some reason (which happens to be a fractional as well)...
In any case, it appears Monads are just a way to ensure that your chain of operations behaves in a predictable way, and a function that claims to be Num -> Num, composed with another function of Num -> Num and called with x, does not, say, fire the missiles.
On the other hand, if we have a function which does fire the missiles, we can compose it with other functions which also fire the missiles, because our intent is clear -- we want to fire the missiles -- but it won't try printing "Hello World" for some odd reason.
In Haskell, main is of type IO (), or IO [()]. The distinction is strange and I will not discuss it, but here's what I think happens:
If I have main, I want it to do a chain of actions; the reason I run the program is to produce an effect -- usually through IO. Thus I can chain IO operations together in main in order to -- do IO, nothing else.
If I try to do something which does not "return IO", the program will complain that the chain does not flow, or basically "How does this relate to what we are trying to do -- an IO action?" It appears to force the programmer to keep their train of thought, without straying off and thinking about firing the missiles while creating algorithms for sorting -- which does not flow.
Basically, Monads appear to be a hint to the compiler that "hey, you know this function that returns a number here, it doesn't actually always work, it can sometimes produce a number, and sometimes nothing at all, just keep this in mind". Knowing this, if you try to assert a monadic action, the monadic action may act as a compile-time exception saying "hey, this isn't actually a number, this CAN be a number, but you can't assume this; do something to ensure that the flow is acceptable," which prevents unpredictable program behavior -- to a fair extent.
It appears monads are not about purity, nor control, but about maintaining an identity of a category on which all behavior is predictable and defined, or does not compile. You cannot do nothing when you are expected to do something, and you cannot do something if you are expected to do nothing (visible).
The biggest reason I could think of for Monads is: go look at procedural/OOP code, and you will notice that you do not know where the program starts or ends; all you see is a lot of jumping and a lot of math, magic, and missiles. You will not be able to maintain it, and if you can, you will spend quite a lot of time wrapping your mind around the whole program before you can understand any part of it, because modularity in this context is based on interdependent "sections" of code, where code is optimized to be as related as possible for promise of efficiency/inter-relation. Monads are very concrete and well defined by definition, and ensure that the flow of the program is possible to analyze, and make it possible to isolate the parts which are hard to analyze -- as they themselves are monads. A monad appears to be a "comprehensible unit which is predictable upon its full understanding" -- if you understand the "Maybe" monad, there's no possible way it will do anything except be "Maybe", which appears trivial, but in most non-monadic code, a simple function "helloworld" can fire the missiles, do nothing, or destroy the universe or even distort time -- we have no idea nor have any guarantees that IT IS WHAT IT IS. A monad GUARANTEES that IT IS WHAT IT IS, which is very powerful.
All things in the "real world" appear to be monads, in the sense that they are bound by definite observable laws preventing confusion. This does not mean we have to mimic all the operations of such an object to create classes; instead we can simply say "a square is a square" -- nothing but a square, not even a rectangle nor a circle -- and "a square has an area of the length of one of its existing dimensions multiplied by itself". No matter what square you have, if it's a square in 2D space, its area absolutely cannot be anything but its length squared; it's almost trivial to prove. This is very powerful because we do not need to make assertions to ensure that our world is the way it is; we just use the implications of reality to prevent our programs from falling off track.
I'm pretty much guaranteed to be wrong, but I think this could help somebody out there, so hopefully it helps somebody.
In the context of Scala you will find the following to be the simplest definition. Basically flatMap (or bind) is 'associative' and there exists an identity.
trait M[+A] {
def flatMap[B](f: A => M[B]): M[B] // AKA bind
// Pseudo Meta Code
def isValidMonad: Boolean = {
// for every parameter the following holds
def isAssociativeOn[X, Y, Z](x: M[X], f: X => M[Y], g: Y => M[Z]): Boolean =
x.flatMap(f).flatMap(g) == x.flatMap(f(_).flatMap(g))
// for every parameter X and x, there exists an id
// such that the following holds
def isAnIdentity[X](x: M[X], id: X => M[X]): Boolean =
x.flatMap(id) == x
}
}
E.g.
// These could be any functions
val f: Int => Option[String] = number => if (number == 7) Some("hello") else None
val g: String => Option[Double] = string => Some(3.14)
// Observe these are identical. Since Option is a Monad
// they will always be identical no matter what the functions are
scala> Some(7).flatMap(f).flatMap(g)
res211: Option[Double] = Some(3.14)
scala> Some(7).flatMap(f(_).flatMap(g))
res212: Option[Double] = Some(3.14)
// As Option is a Monad, there exists an identity:
val id: Int => Option[Int] = x => Some(x)
// Observe these are identical
scala> Some(7).flatMap(id)
res213: Option[Int] = Some(7)
scala> Some(7)
res214: Some[Int] = Some(7)
NOTE Strictly speaking the definition of a Monad in functional programming is not the same as the definition of a Monad in Category Theory, which is defined in terms of map and flatten. Though they are kind of equivalent under certain mappings. This presentation is very good: http://www.slideshare.net/samthemonad/monad-presentation-scala-as-a-category
This answer begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".
Consider these three functions in pseudocode:
f(<x, messages>) := <x, messages "called f. ">
g(<x, messages>) := <x, messages "called g. ">
wrap(x) := <x, "">
f takes an ordered pair of the form <x, messages> and returns an ordered pair. It leaves the first item untouched and appends "called f. " to the second item. Same with g.
You can compose these functions and get your original value, along with a string that shows which order the functions were called in:
f(g(wrap(x)))
= f(g(<x, "">))
= f(<x, "called g. ">)
= <x, "called g. called f. ">
You dislike the fact that f and g are responsible for appending their own log messages to the previous logging information. (Just imagine for the sake of argument that instead of appending strings, f and g must perform complicated logic on the second item of the pair. It would be a pain to repeat that complicated logic in two -- or more -- different functions.)
You prefer to write simpler functions:
f(x) := <x, "called f. ">
g(x) := <x, "called g. ">
wrap(x) := <x, "">
But look at what happens when you compose them:
f(g(wrap(x)))
= f(g(<x, "">))
= f(<<x, "">, "called g. ">)
= <<<x, "">, "called g. ">, "called f. ">
The problem is that passing a pair into a function does not give you what you want. But what if you could feed a pair into a function:
feed(f, feed(g, wrap(x)))
= feed(f, feed(g, <x, "">))
= feed(f, <x, "called g. ">)
= <x, "called g. called f. ">
Read feed(f, m) as "feed m into f". To feed a pair <x, messages> into a function f is to pass x into f, get <y, message> out of f, and return <y, messages message>.
feed(f, <x, messages>) := let <y, message> = f(x)
in <y, messages message>
Notice what happens when you do three things with your functions:
First: if you wrap a value and then feed the resulting pair into a function:
feed(f, wrap(x))
= feed(f, <x, "">)
= let <y, message> = f(x)
in <y, "" message>
= let <y, message> = <x, "called f. ">
in <y, "" message>
= <x, "" "called f. ">
= <x, "called f. ">
= f(x)
That is the same as passing the value into the function.
Second: if you feed a pair into wrap:
feed(wrap, <x, messages>)
= let <y, message> = wrap(x)
in <y, messages message>
= let <y, message> = <x, "">
in <y, messages message>
= <x, messages "">
= <x, messages>
That does not change the pair.
Third: if you define a function that takes x and feeds g(x) into f:
h(x) := feed(f, g(x))
and feed a pair into it:
feed(h, <x, messages>)
= let <y, message> = h(x)
in <y, messages message>
= let <y, message> = feed(f, g(x))
in <y, messages message>
= let <y, message> = feed(f, <x, "called g. ">)
in <y, messages message>
= let <y, message> = let <z, msg> = f(x)
in <z, "called g. " msg>
in <y, messages message>
= let <y, message> = let <z, msg> = <x, "called f. ">
in <z, "called g. " msg>
in <y, messages message>
= let <y, message> = <x, "called g. " "called f. ">
in <y, messages message>
= <x, messages "called g. " "called f. ">
= feed(f, <x, messages "called g. ">)
= feed(f, feed(g, <x, messages>))
That is the same as feeding the pair into g and feeding the resulting pair into f.
You have most of a monad. Now you just need to know about the data types in your program.
What type of value is <x, "called f. ">? Well, that depends on what type of value x is. If x is of type t, then your pair is a value of type "pair of t and string". Call that type M t.
M is a type constructor: M alone does not refer to a type, but M _ refers to a type once you fill in the blank with a type. An M int is a pair of an int and a string. An M string is a pair of a string and a string. Etc.
Congratulations, you have created a monad!
Formally, your monad is the tuple <M, feed, wrap>.
A monad is a tuple <M, feed, wrap> where:
M is a type constructor.
feed takes a (function that takes a t and returns an M u) and an M t and returns an M u.
wrap takes a v and returns an M v.
t, u, and v are any three types that may or may not be the same. A monad satisfies the three properties you proved for your specific monad:
Feeding a wrapped t into a function is the same as passing the unwrapped t into the function.
Formally: feed(f, wrap(x)) = f(x)
Feeding an M t into wrap does nothing to the M t.
Formally: feed(wrap, m) = m
Feeding an M t (call it m) into a function that
passes the t into g
gets an M u (call it n) from g
feeds n into f
is the same as
feeding m into g
getting n from g
feeding n into f
Formally: feed(h, m) = feed(f, feed(g, m)) where h(x) := feed(f, g(x))
Typically, feed is called bind (AKA >>= in Haskell) and wrap is called return.
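In Haskell terms, this <M, feed, wrap> monad is essentially a string-logging Writer monad. A minimal sketch (the Logged name is invented here):
-- M t is a pair of a t and an accumulated log string.
newtype Logged t = Logged (t, String)

wrap :: t -> Logged t
wrap x = Logged (x, "")

-- feed applies f to the carried value and appends the new message.
feed :: (a -> Logged b) -> Logged a -> Logged b
feed f (Logged (x, messages)) =
  let Logged (y, message) = f x
  in Logged (y, messages ++ message)

-- The simpler functions from the example:
f, g :: Int -> Logged Int
f x = Logged (x, "called f. ")
g x = Logged (x, "called g. ")
-- feed f (feed g (wrap 1)) is Logged (1, "called g. called f. ")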
I will try to explain Monad in the context of Haskell.
In functional programming, function composition is important. It allows our program to consist of small, easy-to-read functions.
Let's say we have two functions: g :: Int -> String and f :: String -> Bool.
We can do (f . g) x, which is just the same as f (g x), where x is an Int value.
When doing composition/applying the result of one function to another, having the types match up is important. In the above case, the type of the result returned by g must be the same as the type accepted by f.
But sometimes values are in contexts, and this makes it a bit less easy to line up types. (Having values in contexts is very useful. For example, the Maybe Int type represents an Int value that may not be there, the IO String type represents a String value that is there as a result of performing some side effects.)
Let's say we now have g1 :: Int -> Maybe String and f1 :: String -> Maybe Bool. g1 and f1 are very similar to g and f respectively.
We can't do (f1 . g1) x or f1 (g1 x), where x is an Int value. The type of the result returned by g1 is not what f1 expects.
We could compose f and g with the . operator, but now we can't compose f1 and g1 with .. The problem is that we can't straightforwardly pass a value in a context to a function that expects a value that is not in a context.
Wouldn't it be nice if we introduced an operator to compose g1 and f1, such that we can write (f1 OPERATOR g1) x? g1 returns a value in a context. The value will be taken out of its context and passed to f1. And yes, we have such an operator. It's <=<.
We also have the >>= operator that does for us the exact same thing, though in a slightly different syntax.
We write: g1 x >>= f1. g1 x is a Maybe String value. The >>= operator helps take that String value out of the "perhaps-not-there" context, and apply it to f1. The result of f1, which is a Maybe Bool, will be the result of the entire >>= operation.
And finally, why is Monad useful? Because Monad is the type class that defines the >>= operator, very much the same as the Eq type class that defines the == and /= operators.
To conclude, the Monad type class defines the >>= operator that allows us to pass values in a context (we call these monadic values) to functions that don't expect values in a context. The context will be taken care of.
If there is one thing to remember here, it is that Monads allow function composition that involves values in contexts.
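To make that concrete, here is a minimal sketch; the bodies of g1 and f1 are invented, since only their types were given above:
g1 :: Int -> Maybe String
g1 n = if n > 0 then Just (show n) else Nothing

f1 :: String -> Maybe Bool
f1 s = Just (length s > 2)

-- Kleisli composition (<=<, from Control.Monad) lines the contexts up:
-- (f1 <=< g1) 123 == Just True
-- (f1 <=< g1) (-1) == Nothing
-- and equivalently with bind: g1 123 >>= f1 == Just True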
A Monad is an Applicative (i.e. something that you can lift binary -- hence, "n-ary" -- functions to,(1) and inject pure values into(2)) Functor (i.e. something that you can map over,(3) i.e. lift unary functions to(3)) with the added ability to flatten the nested datatype (with each of the three notions following its corresponding set of laws). In Haskell, this flattening operation is called join.
The general (generic, parametric) type of this "join" operation is:
join :: Monad m => m (m a) -> m a
for any monad m (NB all ms in the type are the same!).
A specific m monad defines its specific version of join working for any value type a "carried" by the monadic values of type m a. Some specific types are:
join :: [[a]] -> [a] -- for lists, or nondeterministic values
join :: Maybe (Maybe a) -> Maybe a -- for Maybe, or optional values
join :: IO (IO a) -> IO a -- for I/O-produced values
The join operation converts an m-computation producing an m-computation of a-type values into one combined m-computation of a-type values. This allows for combination of computation steps into one larger computation.
This step-combining "bind" operator (>>=) simply uses fmap and join together, i.e.
(ma >>= k) == join (fmap k ma)
{-
ma :: m a -- `m`-computation which produces `a`-type values
k :: a -> m b -- create new `m`-computation from an `a`-type value
fmap k ma :: m ( m b ) -- `m`-computation of `m`-computation of `b`-type values
(ma >>= k) :: m b -- `m`-computation which produces `b`-type values
-}
Conversely, join can be defined via bind: join mma == mma >>= id, where id x = x -- whichever of the two is more convenient for a given type m.
For monads, both the do-notation and its equivalent bind-using code,
do { x <- mx ; y <- my ; return (f x y) } -- x :: a , mx :: m a
-- y :: b , my :: m b
mx >>= (\x -> -- nested
my >>= (\y -> -- lambda
return (f x y) )) -- functions
can be read as
first "do" mx, and when it's done, get its "result" as x and let me use it to "do" something else.
In a given do block, each of the values to the right of the binding arrow <- is of type m a for some type a and the same monad m throughout the do block.
return x is a neutral m-computation which just produces the pure value x it is given, such that binding any m-computation with return does not change that computation at all.
(1) with liftA2 :: Applicative m => (a -> b -> c) -> m a -> m b -> m c
(2) with pure :: Applicative m => a -> m a
(3) with fmap :: Functor m => (a -> b) -> m a -> m b
There are also the equivalent Monad methods,
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
return :: Monad m => a -> m a
liftM :: Monad m => (a -> b) -> m a -> m b
Given a monad, the other definitions could be made as
pure a = return a
fmap f ma = do { a <- ma ; return (f a) }
liftA2 f ma mb = do { a <- ma ; b <- mb ; return (f a b) }
(ma >>= k) = do { a <- ma ; b <- k a ; return b }
If I've understood correctly, IEnumerable is derived from monads. I wonder if that might be an interesting angle of approach for those of us from the C# world?
For what it's worth, here are some links to tutorials that helped me (and no, I still haven't understood what monads are).
http://osteele.com/archives/2007/12/overloading-semicolon
http://spbhug.folding-maps.org/wiki/MonadsEn
http://www.loria.fr/~kow/monads/
What the world needs is another monad blog post, but I think this is useful in identifying existing monads in the wild.
monads are fractals
The above is a fractal called the Sierpinski triangle, the only fractal I can remember to draw. Fractals are self-similar structures like that triangle, in which the parts are similar to the whole (here, exactly half the scale of the parent triangle).
Monads are fractals. Given a monadic data structure, its values can be composed to form another value of the data structure. This is why monads are useful for programming, and this is why they occur in many situations.
http://code.google.com/p/monad-tutorial/ is a work in progress to address exactly this question.
A monad is a container, but for data. A special container.
All containers can have openings and handles and spouts, but these containers are all guaranteed to have certain openings and handles and spouts.
Why? Because these guaranteed openings and handles and spouts are useful for picking up and linking together the containers in specific, common ways.
This allows you to pick up different containers without having to know much about them. It also allows different kinds of containers to link together easily.
Having briefly looked at Haskell recently, what would be a brief, succinct, practical explanation as to what a monad essentially is?
I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail.
First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder which is a bit more descriptive of what they are actually useful for.
They are a pattern for chaining operations. It looks a bit like method chaining in object-oriented languages, but the mechanism is slightly different.
The pattern is mostly used in functional languages (especially Haskell which uses monads pervasively) but can be used in any language which support higher-order functions (that is, functions which can take other functions as arguments).
Arrays in JavaScript support the pattern, so let’s use that as the first example.
The gist of the pattern is we have a type (Array in this case) which has a method which takes a function as argument. The operation supplied must return an instance of the same type (i.e. return an Array).
First an example of method chaining which does not use the monad pattern:
[1,2,3].map(x => x + 1)
The result is [2,3,4]. The code does not conform to the monad pattern, since the function we are supplying as an argument returns a number, not an Array. The same logic in monad form would be:
[1,2,3].flatMap(x => [x + 1])
Here we supply an operation which returns an Array, so now it conforms to the pattern. The flatMap method executes the provided function for every element in the array. It expects an array as result for each invocation (rather than single values), but merges the resulting set of arrays into a single array. So the end result is the same, the array [2,3,4].
(The function argument provided to a method like map or flatMap is often called a "callback" in JavaScript. I will call it the "operation" since it is more general.)
If we chain multiple operations (in the traditional way):
[1,2,3].map(a => a + 1).filter(b => b != 3)
The result is the array [2,4].
The same chaining in monad form:
[1,2,3].flatMap(a => [a + 1]).flatMap(b => b != 3 ? [b] : [])
Yields the same result, the array [2,4].
You will immediately notice that the monad form is quite a bit uglier than the non-monad! This just goes to show that monads are not necessarily “good”. They are a pattern which is sometimes beneficial and sometimes not.
Do note that the monad pattern can be combined in a different way:
[1,2,3].flatMap(a => [a + 1].flatMap(b => b != 3 ? [b] : []))
Here the binding is nested rather than chained, but the result is the same. This is an important property of monads as we will see later. It means two operations combined can be treated the same as a single operation.
The operation is allowed to return an array with different element types, for example transforming an array of numbers into an array of strings or something else, as long as it is still an Array.
This can be described a bit more formally using Typescript notation. An array has the type Array<T>, where T is the type of the elements in the array. The method flatMap() takes a function argument of the type T => Array<U> and returns an Array<U>.
Generalized, a monad is any type Foo<Bar> which has a "bind" method which takes a function argument of type Bar => Foo<Baz> and returns a Foo<Baz>.
This answers what monads are. The rest of this answer will try to explain through examples why monads can be a useful pattern in a language like Haskell which has good support for them.
Haskell and Do-notation
To translate the map/filter example directly to Haskell, we replace flatMap with the >>= operator:
[1,2,3] >>= \a -> [a+1] >>= \b -> if b == 3 then [] else [b]
The >>= operator is the bind function in Haskell. It does the same as flatMap in JavaScript when the operand is a list, but it is overloaded with different meaning for other types.
But Haskell also has a dedicated syntax for monad expressions, the do-block, which hides the bind operator altogether:
do
a <- [1,2,3]
b <- [a+1]
if b == 3 then [] else [b]
This hides the "plumbing" and lets you focus on the actual operations applied at each step.
In a do-block, each line is an operation. The constraint still holds that all operations in the block must return the same type. Since the first expression is a list, the other operations must also return a list.
The back-arrow <- looks deceptively like an assignment, but note that this is the parameter passed in the bind. So, when the expression on the right side is a List of Integers, the variable on the left side will be a single Integer – but the rest of the block will be executed for each integer in the list.
Example: Safe navigation (the Maybe type)
Enough about lists, lets see how the monad pattern can be useful for other types.
Some functions may not always return a valid value. In Haskell this is represented by the Maybe-type, which is an option that is either Just value or Nothing.
Chaining operations which always return a valid value is of course straightforward:
streetName = getStreetName (getAddress (getUser 17))
But what if any of the functions could return Nothing? We need to check each result individually and only pass the value to the next function if it is not Nothing:
case getUser 17 of
Nothing -> Nothing
Just user ->
case getAddress user of
Nothing -> Nothing
Just address ->
getStreetName address
Quite a lot of repetitive checks! Imagine if the chain was longer. Haskell solves this with the monad pattern for Maybe:
do
user <- getUser 17
addr <- getAddress user
getStreetName addr
This do-block invokes the bind-function for the Maybe type (since the result of the first expression is a Maybe). The bind-function only executes the following operation if the value is Just value, otherwise it just passes the Nothing along.
Here the monad-pattern is used to avoid repetitive code. This is similar to how some other languages use macros to simplify syntax, although macros achieve the same goal in a very different way.
Note that it is the combination of the monad pattern and the monad-friendly syntax in Haskell which result in the cleaner code. In a language like JavaScript without any special syntax support for monads, I doubt the monad pattern would be able to simplify the code in this case.
Mutable state
Haskell does not support mutable state. All variables are constants and all values immutable. But the State type can be used to emulate programming with mutable state:
add2 :: State Integer Integer
add2 = do
-- add 1 to state
x <- get
put (x + 1)
-- increment in another way
modify (+1)
-- return state
get
evalState add2 7
=> 9
The add2 function builds a monad chain which is then evaluated with 7 as the initial state.
Obviously this is something which only makes sense in Haskell. Other languages support mutable state out of the box. Haskell is generally "opt-in" on language features - you enable mutable state when you need it, and the type system ensures the effect is explicit. IO is another example of this.
IO
The IO type is used for chaining and executing “impure” functions.
Like any other practical language, Haskell has a bunch of built-in functions which interface with the outside world: putStrLn, getLine and so on. These functions are called “impure” because they either cause side effects or have non-deterministic results. Even something simple like getting the time is considered impure because the result is non-deterministic – calling it twice with the same arguments may return different values.
A pure function is deterministic – its result depends purely on the arguments passed, and it has no side effects on the environment besides returning a value.
Haskell heavily encourages the use of pure functions – this is a major selling point of the language. Unfortunately for purists, you need some impure functions to do anything useful. The Haskell compromise is to cleanly separate pure and impure, and guarantee that there is no way for pure functions to execute impure functions, directly or indirectly.
This is guaranteed by giving all impure functions the IO type. The entry point of a Haskell program is the main function, which has the IO type, so we can execute impure functions at the top level.
But how does the language prevent pure functions from executing impure functions? This is due to the lazy nature of Haskell. A function is only executed if its output is consumed by some other function. But there is no way to consume an IO value except to assign it to main. So if a function wants to execute an impure function, it has to be connected to main and have the IO type.
Using monad chaining for IO operations also ensures that they are executed in a linear and predictable order, just like statements in an imperative language.
This brings us to the first program most people will write in Haskell:
main :: IO ()
main = do
putStrLn "Hello World"
The do keyword is superfluous when there is only a single operation and therefore nothing to bind, but I keep it anyway for consistency.
The () type means “void”. This special return type is only useful for IO functions called for their side effect.
A longer example:
main = do
putStrLn "What is your name?"
name <- getLine
putStrLn ("hello" ++ name)
This builds a chain of IO operations, and since they are assigned to the main function, they get executed.
Comparing IO with Maybe shows the versatility of the monad pattern. For Maybe, the pattern is used to avoid repetitive code by moving conditional logic to the binding function. For IO, the pattern is used to ensure that all operations of the IO type are sequenced and that IO operations cannot "leak" to pure functions.
Summing up
In my subjective opinion, the monad pattern is only really worthwhile in a language which has some built-in support for the pattern. Otherwise it just leads to overly convoluted code. But Haskell (and some other languages) have some built-in support which hides the tedious parts, and then the pattern can be used for a variety of useful things. Like:
Avoiding repetitive code (Maybe)
Adding language features like mutable state or exceptions for delimited areas of the program.
Isolating icky stuff from nice stuff (IO)
Embedded domain-specific languages (Parser)
Adding GOTO to the language.
Explaining "what is a monad" is a bit like saying "what is a number?" We use numbers all the time. But imagine you met someone who didn't know anything about numbers. How the heck would you explain what numbers are? And how would you even begin to describe why that might be useful?
What is a monad? The short answer: It's a specific way of chaining operations together.
In essence, you're writing execution steps and linking them together with the "bind function". (In Haskell, it's named >>=.) You can write the calls to the bind operator yourself, or you can use syntax sugar which makes the compiler insert those function calls for you. But either way, each step is separated by a call to this bind function.
So the bind function is like a semicolon; it separates the steps in a process. The bind function's job is to take the output from the previous step, and feed it into the next step.
That doesn't sound too hard, right? But there is more than one kind of monad. Why? How?
Well, the bind function can just take the result from one step, and feed it to the next step. But if that's "all" the monad does... that actually isn't very useful. And that's important to understand: Every useful monad does something else in addition to just being a monad. Every useful monad has a "special power", which makes it unique.
(A monad that does nothing special is called the "identity monad". Rather like the identity function, this sounds like an utterly pointless thing, yet turns out not to be... But that's another story™.)
Basically, each monad has its own implementation of the bind function. And you can write a bind function such that it does hoopy things between execution steps. For example:
If each step returns a success/failure indicator, you can have bind execute the next step only if the previous one succeeded. In this way, a failing step aborts the whole sequence "automatically", without any conditional testing from you. (The Failure Monad.)
Extending this idea, you can implement "exceptions". (The Error Monad or Exception Monad.) Because you're defining them yourself rather than it being a language feature, you can define how they work. (E.g., maybe you want to ignore the first two exceptions and only abort when a third exception is thrown.)
You can make each step return multiple results, and have the bind function loop over them, feeding each one into the next step for you. In this way, you don't have to keep writing loops all over the place when dealing with multiple results. The bind function "automatically" does all that for you. (The List Monad.)
As well as passing a "result" from one step to another, you can have the bind function pass extra data around as well. This data now doesn't show up in your source code, but you can still access it from anywhere, without having to manually pass it to every function. (The Reader Monad.)
You can make it so that the "extra data" can be replaced. This allows you to simulate destructive updates, without actually doing destructive updates. (The State Monad and its cousin the Writer Monad.)
Because you're only simulating destructive updates, you can trivially do things that would be impossible with real destructive updates. For example, you can undo the last update, or revert to an older version.
You can make a monad where calculations can be paused, so you can pause your program, go in and tinker with internal state data, and then resume it.
You can implement "continuations" as a monad. This allows you to break people's minds!
All of this and more is possible with monads. Of course, all of this is also perfectly possible without monads too. It's just drastically easier using monads.
Actually, contrary to a common understanding of Monads, they have nothing to do with state. Monads are simply a way of wrapping things, providing methods to do operations on the wrapped stuff without unwrapping it.
For example, you can create a type to wrap another one, in Haskell:
data Wrapped a = Wrap a
To wrap stuff we define
return :: a -> Wrapped a
return x = Wrap x
To perform operations without unwrapping, say you have a function f :: a -> b, then you can do this to lift that function to act on wrapped values:
fmap :: (a -> b) -> (Wrapped a -> Wrapped b)
fmap f (Wrap x) = Wrap (f x)
That's about all there is to understand. However, it turns out that there is a more general function to do this lifting, which is bind:
bind :: (a -> Wrapped b) -> (Wrapped a -> Wrapped b)
bind f (Wrap x) = f x
bind can do a bit more than fmap, but not vice versa. In fact, fmap can be defined using only bind and return. So, when defining a monad, you give its type (here it was Wrapped a) and then say how its return and bind operations work.
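Concretely, for the Wrapped type above (named fmap' here only to avoid clashing with the fmap already defined):
-- fmap recovered from bind and return alone
fmap' :: (a -> b) -> (Wrapped a -> Wrapped b)
fmap' f = bind (return . f)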
The cool thing is that this turns out to be such a general pattern that it pops up all over the place, encapsulating state in a pure way is only one of them.
For a good article on how monads can be used to introduce functional dependencies and thus control order of evaluation, like it is used in Haskell's IO monad, check out IO Inside.
As for understanding monads, don't worry too much about it. Read whatever you find interesting about them, and don't worry if you don't understand right away. Then just diving into a language like Haskell is the way to go. Monads are one of those things where understanding trickles into your brain by practice; one day you just suddenly realize you understand them.
But, You could have invented Monads!
sigfpe says:
But all of these introduce monads as something esoteric in need of explanation. But what I want to argue is that they aren't esoteric at all. In fact, faced with various problems in functional programming you would have been led, inexorably, to certain solutions, all of which are examples of monads. In fact, I hope to get you to invent them now if you haven't already. It's then a small step to notice that all of these solutions are in fact the same solution in disguise. And after reading this, you might be in a better position to understand other documents on monads because you'll recognise everything you see as something you've already invented.
Many of the problems that monads try to solve are related to the issue of side effects. So we'll start with them. (Note that monads let you do more than handle side-effects, in particular many types of container object can be viewed as monads. Some of the introductions to monads find it hard to reconcile these two different uses of monads and concentrate on just one or the other.)
In an imperative programming language such as C++, functions behave nothing like the functions of mathematics. For example, suppose we have a C++ function that takes a single floating point argument and returns a floating point result. Superficially it might seem a little like a mathematical function mapping reals to reals, but a C++ function can do more than just return a number that depends on its arguments. It can read and write the values of global variables as well as writing output to the screen and receiving input from the user. In a pure functional language, however, a function can only read what is supplied to it in its arguments and the only way it can have an effect on the world is through the values it returns.
A monad is a datatype that has two operations: >>= (aka bind) and return (aka unit). return takes an arbitrary value and creates an instance of the monad with it. >>= takes an instance of the monad and maps a function over it. (You can see already that a monad is a strange kind of datatype, since in most programming languages you couldn't write a function that takes an arbitrary value and creates a type from it. Monads use a kind of parametric polymorphism.)
In Haskell notation, the monad interface is written
class Monad m where
return :: a -> m a
(>>=) :: forall a b . m a -> (a -> m b) -> m b
These operations are supposed to obey certain "laws", but that's not terrifically important: the "laws" just codify the way sensible implementations of the operations ought to behave (basically, that >>= and return ought to agree about how values get transformed into monad instances and that >>= is associative).
Monads are not just about state and I/O: they abstract a common pattern of computation that includes working with state, I/O, exceptions, and non-determinism. Probably the simplest monads to understand are lists and option types:
instance Monad [ ] where
[] >>= k = []
(x:xs) >>= k = k x ++ (xs >>= k)
return x = [x]
instance Monad Maybe where
Just x >>= k = k x
Nothing >>= k = Nothing
return x = Just x
where [] and : are the list constructors, ++ is the concatenation operator, and Just and Nothing are the Maybe constructors. Both of these monads encapsulate common and useful patterns of computation on their respective data types (note that neither has anything to do with side effects or I/O).
You really have to play around writing some non-trivial Haskell code to appreciate what monads are about and why they are useful.
You should first understand what a functor is. Before that, understand higher-order functions.
A higher-order function is simply a function that takes a function as an argument.
A functor is any type construction T for which there exists a higher-order function, call it map, that transforms a function of type a -> b (given any two types a and b) into a function T a -> T b. This map function must also obey the laws of identity and composition such that the following expressions return true for all p and q (Haskell notation):
map id = id
map (p . q) = map p . map q
For example, a type constructor called List is a functor if it comes equipped with a function of type (a -> b) -> List a -> List b which obeys the laws above. The only practical implementation is obvious. The resulting List a -> List b function iterates over the given list, calling the (a -> b) function for each element, and returns the list of the results.
A monad is essentially just a functor T with two extra methods, join, of type T (T a) -> T a, and unit (sometimes called return, fork, or pure) of type a -> T a. For lists in Haskell:
join :: [[a]] -> [a]
pure :: a -> [a]
Why is that useful? Because you could, for example, map over a list with a function that returns a list. Join takes the resulting list of lists and concatenates them. List is a monad because this is possible.
You can write a function that does map, then join. This function is called bind, or flatMap, or (>>=), or (=<<). This is normally how a monad instance is given in Haskell.
A monad has to satisfy certain laws, namely that join must be associative. This means that if you have a value x of type [[[a]]] then join (join x) should equal join (map join x). And pure must be an identity for join, such that join (pure x) == x (and likewise join (map pure x) == x).
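A quick worked check of the associativity law at the list type, where join is concatenation:
-- x = [[[1],[2]],[[3]]] :: [[[Int]]]
-- join (join x) == join [[1],[2],[3]] == [1,2,3]
-- join (map join x) == join [[1,2],[3]] == [1,2,3]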
[Disclaimer: I am still trying to fully grok monads. The following is just what I have understood so far. If it’s wrong, hopefully someone knowledgeable will call me on the carpet.]
Arnar wrote:
Monads are simply a way to wrapping things and provide methods to do operations on the wrapped stuff without unwrapping it.
That’s precisely it. The idea goes like this:
You take some kind of value and wrap it with some additional information. Just like the value is of a certain kind (eg. an integer or a string), so the additional information is of a certain kind.
E.g., that extra information might be a Maybe or an IO.
Then you have some operators that allow you to operate on the wrapped data while carrying along that additional information. These operators use the additional information to decide how to change the behaviour of the operation on the wrapped value.
E.g., a Maybe Int can be a Just Int or Nothing. Now, if you add a Maybe Int to a Maybe Int, the operator will check to see if they are both Just Ints inside, and if so, will unwrap the Ints, pass them to the addition operator, re-wrap the resulting Int into a new Just Int (which is a valid Maybe Int), and thus return a Maybe Int. But if one of them was a Nothing inside, this operator will just immediately return Nothing, which again is a valid Maybe Int. That way, you can pretend that your Maybe Ints are just normal numbers and perform regular math on them. If you were to get a Nothing, your equations will still produce the right result – without you having to litter checks for Nothing everywhere.
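A minimal Haskell sketch of such an operator (the name addMaybe is invented; the real Maybe machinery generalizes this via its bind):
-- Add two wrapped Ints; any Nothing short-circuits the whole addition.
addMaybe :: Maybe Int -> Maybe Int -> Maybe Int
addMaybe (Just a) (Just b) = Just (a + b)
addMaybe _ _ = Nothing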
But the example is just what happens for Maybe. If the extra information was an IO, then that special operator defined for IOs would be called instead, and it could do something totally different before performing the addition. (OK, adding two IO Ints together is probably nonsensical – I’m not sure yet.) (Also, if you paid attention to the Maybe example, you have noticed that “wrapping a value with extra stuff” is not always correct. But it’s hard to be exact, correct and precise without being inscrutable.)
Basically, “monad” roughly means “pattern”. But instead of a book full of informally explained and specifically named Patterns, you now have a language construct – syntax and all – that allows you to declare new patterns as things in your program. (The imprecision here is all the patterns have to follow a particular form, so a monad is not quite as generic as a pattern. But I think that’s the closest term that most people know and understand.)
And that is why people find monads so confusing: because they are such a generic concept. To ask what makes something a monad is as vague as asking what makes something a pattern.
But think of the implications of having syntactic support in the language for the idea of a pattern: instead of having to read the Gang of Four book and memorise the construction of a particular pattern, you just write code that implements this pattern in an agnostic, generic way once and then you are done! You can then reuse this pattern, like Visitor or Strategy or Façade or whatever, just by decorating the operations in your code with it, without having to re-implement it over and over!
So that is why people who understand monads find them so useful: it’s not some ivory tower concept that intellectual snobs pride themselves on understanding (OK, that too of course, teehee), but actually makes code simpler.
After much striving, I think I finally understand the monad. After rereading my own lengthy critique of the overwhelmingly top voted answer, I will offer this explanation.
There are three questions that need to be answered to understand monads:
Why do you need a monad?
What is a monad?
How is a monad implemented?
As I noted in my original comments, too many monad explanations get caught up in question number 3 without first really adequately covering question 2 or question 1.
Why do you need a monad?
Pure functional languages like Haskell are different from imperative languages like C or Java in that a pure functional program is not necessarily executed in a specific order, one step at a time. A Haskell program is more akin to a mathematical function, in which you may solve the "equation" in any number of potential orders. This confers a number of benefits, among which is that it eliminates the possibility of certain kinds of bugs, particularly those relating to things like "state".
However, there are certain problems that are not so straightforward to solve with this style of programming. Some things, like console programming, and file i/o, need things to happen in a particular order, or need to maintain state. One way to deal with this problem is to create a kind of object that represents the state of a computation, and a series of functions that take a state object as input, and return a new modified state object.
So let's create a hypothetical "state" value that represents the state of a console screen. Exactly how this value is constructed is not important, but let's say, in pseudocode, it's an array of ASCII characters that represents what is currently visible on the screen, plus an array that represents the last line of input entered by the user. We've defined some functions that take a console state, modify it, and return a new console state.
consolestate MyConsole = new consolestate;
So to do console programming, but in a pure functional manner, you would need to nest a lot of function calls inside each other.
consolestate FinalConsole = print(input(print(myconsole, "Hello, what's your name?")),"hello, %inputbuffer%!");
Programming in this way keeps the "pure" functional style, while forcing changes to the console to happen in a particular order. But, we'll probably want to do more than just a few operations at a time like in the above example. Nesting functions in that way will start to become ungainly. What we want, is code that does essentially the same thing as above, but is written a bit more like this:
consolestate FinalConsole = myconsole:
print("Hello, what's your name?"):
input():
print("hello, %inputbuffer%!");
This would indeed be a more convenient way to write it. How do we do that though?
What is a monad?
Once you have a type (such as consolestate) that you define along with a bunch of functions designed specifically to operate on that type, you can turn the whole package of these things into a "monad" by defining an operator like : (bind) that automatically feeds return values on its left into function parameters on its right, and a lift operator that turns normal functions into functions that work with that specific kind of bind operator.
How is a monad implemented?
See other answers, that seem quite free to jump into the details of that.
After giving an answer to this question a few years ago, I believe I can improve and simplify that response with...
A monad is a function composition technique that externalizes treatment for some input scenarios using a composing function, bind, to pre-process input during composition.
In normal composition, the function compose (>>) is used to apply the composed function to the result of its predecessor in sequence. Importantly, the function being composed is required to handle all scenarios of its input.
(x -> y) >> (y -> z)
This design can be improved by restructuring the input so that relevant states are more easily interrogated. So, instead of simply y, the value can become Mb: for instance, (is_OK, b) if y included a notion of validity.
For example, when the input is only possibly a number, instead of returning a string which can dutifully contain a number or not, you could restructure the type into a bool indicating the presence of a valid number and a number in tuple such as, bool * float. The composed functions would now no longer need to parse an input string to determine whether a number exists but could merely inspect the bool portion of a tuple.
(Ma -> Mb) >> (Mb -> Mc)
Here, again, composition occurs naturally with compose and so each function must handle all scenarios of its input individually, though doing so is now much easier.
However, what if we could externalize the effort of interrogation for those times where handling a scenario is routine? For example, what if our program does nothing when the input is not OK, as when is_OK is false? If that were done, then composed functions would not need to handle that scenario themselves, dramatically simplifying their code and effecting another level of reuse.
To achieve this externalization we could use a function, bind (>>=), to perform the composition instead of compose. As such, instead of simply transferring values from the output of one function to the input of another, bind would inspect the M portion of Ma and decide whether and how to apply the composed function to the a. Of course, the function bind would be defined specifically for our particular M so as to be able to inspect its structure and perform whatever type of application we want. Nonetheless, the a can be anything, since bind merely passes the a uninspected to the composed function when it determines application necessary. Additionally, the composed functions themselves no longer need to deal with the M portion of the input structure either, simplifying them. Hence...
(a -> Mb) >>= (b -> Mc) or more succinctly Mb >>= (b -> Mc)
In short, a monad externalizes and thereby provides standard behaviour around the treatment of certain input scenarios once the input becomes designed to sufficiently expose them. This design is a shell and content model where the shell contains data relevant to the application of the composed function and is interrogated by and remains only available to the bind function.
Therefore, a monad is three things:
an M shell for holding monad relevant information,
a bind function implemented to make use of this shell information in its application of the composed functions to the content value(s) it finds within the shell, and
composable functions of the form, a -> Mb, producing results that include monadic management data.
Generally speaking, the input to a function is far more restrictive than its output which may include such things as error conditions; hence, the Mb result structure is generally very useful. For instance, the division operator does not return a number when the divisor is 0.
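For instance, a guarded division that pushes the division-by-zero case into the Mb-style result (written here in Haskell, with Maybe playing the role of M):
-- Division with the error case carried in the result structure.
safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv a b = Just (a / b)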
Additionally, monads may include wrap functions that wrap values, a, into the monadic type, Ma, and general functions, a -> b, into monadic functions, a -> Mb, by wrapping their results after application. Of course, like bind, such wrap functions are specific to M. An example:
let return a = [a]
let lift f a = return (f a)
The design of the bind function presumes immutable data structures and pure functions; otherwise things get complex and guarantees cannot be made. As such, there are monadic laws:
Given...
a monadic type M
return = (a -> Ma)
f = (a -> Mb)
g = (b -> Mc)
Then...
Left Identity : (return a) >>= f === f a
Right Identity : Ma >>= return === Ma
Associativity : (Ma >>= f) >>= g === Ma >>= (fun x -> f x >>= g)
Associativity means that bind preserves the order of evaluation regardless of how the bindings are grouped. That is, in the statement of Associativity above, forcing early evaluation of the parenthesized binding of f and g will only result in a function that expects Ma in order to complete the bind. Hence the evaluation of Ma must still be determined first; only then can its value be applied to f, and that result in turn applied to g.
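As a hedged illustration of the laws with Haskell's Maybe type (half is an invented helper), both groupings of bind yield the same result:

half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

assocLeft, assocRight :: Maybe Int
assocLeft  = (Just 12 >>= half) >>= half           -- Just 3
assocRight = Just 12 >>= (\ x -> half x >>= half)  -- Just 3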
A monad is, effectively, a form of "type operator". It will do three things. First, it will "wrap" (or otherwise convert) a value of one type into another type (typically called a "monadic type"). Secondly, it will make all the operations (or functions) available on the underlying type available on the monadic type. Finally, it will provide support for combining itself with another monad to produce a composite monad.
The "maybe monad" is essentially the equivalent of "nullable types" in Visual Basic / C#. It takes a non nullable type "T" and converts it into a "Nullable<T>", and then defines what all the binary operators mean on a Nullable<T>.
Side effects are represented similarly. A structure is created that holds descriptions of side effects alongside a function's return value. The "lifted" operations then copy around side effects as values are passed between functions.
They are called "monads" rather than the easier-to-grasp name of "type operators" for several reasons:
Monads have restrictions on what they can do (see the definition for details).
Those restrictions, along with the fact that there are three operations involved, conform to the structure of something called a monad in Category Theory, which is an obscure branch of mathematics.
They were designed by proponents of "pure" functional languages
Proponents of pure functional languages like obscure branches of mathematics
Because the math is obscure, and monads are associated with particular styles of programming, people tend to use the word monad as a sort of secret handshake. Because of this no one has bothered to invest in a better name.
(See also the answers at What is a monad?)
A good motivation to Monads is sigfpe (Dan Piponi)'s You Could Have Invented Monads! (And Maybe You Already Have). There are a LOT of other monad tutorials, many of which misguidedly try to explain monads in "simple terms" using various analogies: this is the monad tutorial fallacy; avoid them.
As DR MacIver says in Tell us why your language sucks:
So, things I hate about Haskell:
Let’s start with the obvious. Monad tutorials. No, not monads. Specifically the tutorials. They’re endless, overblown and dear god are they tedious. Further, I’ve never seen any convincing evidence that they actually help. Read the class definition, write some code, get over the scary name.
You say you understand the Maybe monad? Good, you're on your way. Just start using other monads and sooner or later you'll understand what monads are in general.
[If you are mathematically oriented, you might want to ignore the dozens of tutorials and learn the definition, or follow lectures in category theory :)
The main part of the definition is that a Monad M involves a "type constructor" that defines for each existing type "T" a new type "M T", and some ways for going back and forth between "regular" types and "M" types.]
Also, surprisingly enough, one of the best introductions to monads is actually one of the early academic papers introducing monads, Philip Wadler's Monads for functional programming. It actually has practical, non-trivial motivating examples, unlike many of the artificial tutorials out there.
Monads are to control flow what abstract data types are to data.
In other words, many developers are comfortable with the idea of Sets, Lists, Dictionaries (or Hashes, or Maps), and Trees. Within those data types there are many special cases (for instance InsertionOrderPreservingIdentityHashMap).
However, when confronted with program "flow" many developers haven't been exposed to many more constructs than if, switch/case, do, while, goto (grr), and (maybe) closures.
So, a monad is simply a control flow construct. A better phrase to replace monad would be 'control type'.
As such, a monad has slots for control logic, or statements, or functions - the equivalent in data structures would be to say that some data structures allow you to add data, and remove it.
For example, the "if" monad:
if( clause ) then block
at its simplest has two slots - a clause, and a block. The if monad is usually built to evaluate the result of the clause, and if not false, evaluate the block. Many developers are not introduced to monads when they learn 'if', and it just isn't necessary to understand monads to write effective logic.
Monads can become more complicated, in the same way that data structures can become more complicated, but there are many broad categories of monad that may have similar semantics, but differing implementations and syntax.
Of course, in the same way that data structures may be iterated over, or traversed, monads may be evaluated.
Compilers may or may not have support for user-defined monads. Haskell certainly does. Ioke has some similar capabilities, although the term monad is not used in the language.
My favorite Monad tutorial:
http://www.haskell.org/haskellwiki/All_About_Monads
(out of 170,000 hits on a Google search for "monad tutorial"!)
#Stu: The point of monads is to allow you to add (usually) sequential semantics to otherwise pure code; you can even compose monads (using Monad Transformers) and get more interesting and complicated combined semantics, like parsing with error handling, shared state, and logging, for example. All of this is possible in pure code, monads just allow you to abstract it away and reuse it in modular libraries (always good in programming), as well as providing convenient syntax to make it look imperative.
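A hedged sketch of one such combination, assuming the mtl/transformers packages: StateT layered over Maybe gives shared state and failure in a single monad.

import Control.Monad.State (StateT, get, put, runStateT, lift)

pop :: StateT [Int] Maybe Int
pop = do
  xs <- get
  case xs of
    []       -> lift Nothing          -- failure aborts the whole computation
    (y : ys) -> put ys >> return y    -- state is threaded behind the scenes

-- runStateT (pop >> pop) [1, 2, 3]  ==  Just (2, [3])
-- runStateT (pop >> pop) []         ==  Nothing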
Haskell already has operator overloading[1]: it uses type classes much the way one might use interfaces in Java or C#, but Haskell just happens to also allow non-alphanumeric tokens like +, &&, and > as infix identifiers. It's only operator overloading in your way of looking at it if you mean "overloading the semicolon"[2]. It sounds like black magic and asking for trouble to "overload the semicolon" (picture enterprising Perl hackers getting wind of this idea), but the point is that without monads there is no semicolon, since purely functional code does not require or allow explicit sequencing.
This all sounds much more complicated than it needs to. sigfpe's article is pretty cool but uses Haskell to explain it, which sort of fails to break the chicken and egg problem of understanding Haskell to grok Monads and understanding Monads to grok Haskell.
[1] This is a separate issue from monads but monads use Haskell's operator overloading feature.
[2] This is also an oversimplification since the operator for chaining monadic actions is >>= (pronounced "bind") but there is syntactic sugar ("do") that lets you use braces and semicolons and/or indentation and newlines.
I am still new to monads, but I thought I would share a link I found that felt really good to read (WITH PICTURES!!):
http://www.matusiak.eu/numerodix/blog/2012/3/11/monads-for-the-layman/
(no affiliation)
Basically, the warm and fuzzy concept that I got from the article was the concept that monads are basically adapters that allow disparate functions to work in a composable fashion, i.e. be able to string up multiple functions and mix and match them without worrying about inconsistent return types and such. So the BIND function is in charge of keeping apples with apples and oranges with oranges when we're trying to make these adapters. And the LIFT function is in charge of taking "lower level" functions and "upgrading" them to work with BIND functions and be composable as well.
I hope I got it right, and more importantly, hope that the article has a valid view on monads. If nothing else, this article helped whet my appetite for learning more about monads.
I've been thinking of Monads in a different way, lately. I've been thinking of them as abstracting out execution order in a mathematical way, which makes new kinds of polymorphism possible.
If you're using an imperative language, and you write some expressions in order, the code ALWAYS runs exactly in that order.
And in the simple case, when you use a monad, it feels the same -- you define a list of expressions that happen in order. Except that, depending on which monad you use, your code might run in order (like in IO monad), in parallel over several items at once (like in the List monad), it might halt partway through (like in the Maybe monad), it might pause partway through to be resumed later (like in a Resumption monad), it might rewind and start from the beginning (like in a Transaction monad), or it might rewind partway to try other options (like in a Logic monad).
And because monads are polymorphic, it's possible to run the same code in different monads, depending on your needs.
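For instance, a hedged illustration: the same polymorphic do block run in two different monads (pairUp is an invented name).

pairUp :: Monad m => m Int -> m (Int, Int)
pairUp mx = do
  a <- mx
  b <- mx
  return (a, b)

-- pairUp (Just 1) == Just (1, 1)                    -- Maybe: may halt partway
-- pairUp [1, 2]   == [(1,1), (1,2), (2,1), (2,2)]   -- []: runs over all items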
Plus, in some cases, it's possible to combine monads together (with monad transformers) to get multiple features at the same time.
tl;dr
{-# LANGUAGE InstanceSigs #-}
newtype Id t = Id t
instance Monad Id where
return :: t -> Id t
return = Id
(=<<) :: (a -> Id b) -> Id a -> Id b
f =<< (Id x) = f x
Prologue
The application operator $ of functions
forall a b. a -> b
is canonically defined
($) :: (a -> b) -> a -> b
f $ x = f x
infixr 0 $
in terms of Haskell-primitive function application f x (infixl 10).
Composition . is defined in terms of $ as
(.) :: (b -> c) -> (a -> b) -> (a -> c)
f . g = \ x -> f $ g x
infixr 9 .
and satisfies the equivalences forall f g h.
f . id = f :: c -> d Right identity
id . g = g :: b -> c Left identity
(f . g) . h = f . (g . h) :: a -> d Associativity
. is associative, and id is its right and left identity.
The Kleisli triple
In programming, a monad is a functor type constructor with an instance of the monad type class. There are several equivalent variants of definition and implementation, each carrying slightly different intuitions about the monad abstraction.
A functor is a type constructor f of kind * -> * with an instance of the functor type class.
{-# LANGUAGE KindSignatures #-}
class Functor (f :: * -> *) where
map :: (a -> b) -> (f a -> f b)
In addition to following statically enforced type protocol, instances of the functor type class must obey the algebraic functor laws forall f g.
map id = id :: f t -> f t Identity
map f . map g = map (f . g) :: f a -> f c Composition / short cut fusion
Functor computations have the type
forall f t. Functor f => f t
A computation c r consists in results r within context c.
Unary monadic functions or Kleisli arrows have the type
forall m a b. Functor m => a -> m b
Kleisli arrows are functions that take one argument a and return a monadic computation m b.
Monads are canonically defined in terms of the Kleisli triple forall m. Functor m =>
(m, return, (=<<))
implemented as the type class
class Functor m => Monad m where
return :: t -> m t
(=<<) :: (a -> m b) -> m a -> m b
infixr 1 =<<
The Kleisli identity return is a Kleisli arrow that promotes a value t into monadic context m. Extension or Kleisli application =<< applies a Kleisli arrow a -> m b to results of a computation m a.
Kleisli composition <=< is defined in terms of extension as
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
f <=< g = \ x -> f =<< g x
infixr 1 <=<
<=< composes two Kleisli arrows, applying the left arrow to results of the right arrow’s application.
Instances of the monad type class must obey the monad laws, most elegantly stated in terms of Kleisli composition: forall f g h.
f <=< return = f :: c -> m d Right identity
return <=< g = g :: b -> m c Left identity
(f <=< g) <=< h = f <=< (g <=< h) :: a -> m d Associativity
<=< is associative, and return is its right and left identity.
Identity
The identity type
type Id t = t
is the identity function on types
Id :: * -> *
Interpreted as a functor,
return :: t -> Id t
= id :: t -> t
(=<<) :: (a -> Id b) -> Id a -> Id b
= ($) :: (a -> b) -> a -> b
(<=<) :: (b -> Id c) -> (a -> Id b) -> (a -> Id c)
= (.) :: (b -> c) -> (a -> b) -> (a -> c)
In canonical Haskell, the identity monad is defined
newtype Id t = Id t
instance Functor Id where
map :: (a -> b) -> Id a -> Id b
map f (Id x) = Id (f x)
instance Monad Id where
return :: t -> Id t
return = Id
(=<<) :: (a -> Id b) -> Id a -> Id b
f =<< (Id x) = f x
Option
An option type
data Maybe t = Nothing | Just t
encodes a computation Maybe t that does not necessarily yield a result t, a computation that may “fail”. The option monad is defined
instance Functor Maybe where
map :: (a -> b) -> (Maybe a -> Maybe b)
map f (Just x) = Just (f x)
map _ Nothing = Nothing
instance Monad Maybe where
return :: t -> Maybe t
return = Just
(=<<) :: (a -> Maybe b) -> Maybe a -> Maybe b
f =<< (Just x) = f x
_ =<< Nothing = Nothing
a -> Maybe b is applied to a result only if Maybe a yields a result.
newtype Nat = Nat Int
The natural numbers can be encoded as those integers greater than or equal to zero.
toNat :: Int -> Maybe Nat
toNat i | i >= 0 = Just (Nat i)
| otherwise = Nothing
The natural numbers are not closed under subtraction.
(-?) :: Nat -> Nat -> Maybe Nat
(Nat n) -? (Nat m) = toNat (n - m)
infixl 6 -?
The option monad covers a basic form of exception handling.
(-? Nat 20) <=< toNat :: Int -> Maybe Nat
List
The list monad, over the list type
data [] t = [] | t : [t]
infixr 5 :
and its additive monoid operation “append”
(++) :: [t] -> [t] -> [t]
(x : xs) ++ ys = x : xs ++ ys
[] ++ ys = ys
infixr 5 ++
encodes nonlinear computation [t] yielding a natural number (0, 1, ...) of results t.
instance Functor [] where
map :: (a -> b) -> ([a] -> [b])
map f (x : xs) = f x : map f xs
map _ [] = []
instance Monad [] where
return :: t -> [t]
return = (: [])
(=<<) :: (a -> [b]) -> [a] -> [b]
f =<< (x : xs) = f x ++ (f =<< xs)
_ =<< [] = []
Extension =<< concatenates ++ all lists [b] resulting from applications f x of a Kleisli arrow a -> [b] to elements of [a] into a single result list [b].
Let the proper divisors of a positive integer n be
divisors :: Integral t => t -> [t]
divisors n = filter (`divides` n) [2 .. n - 1]
divides :: Integral t => t -> t -> Bool
d `divides` n = n `rem` d == 0
then
forall n. let { f = f <=< divisors } in f n = []
In defining the monad type class, instead of extension =<<, the Haskell standard uses its flip, the bind operator >>=.
class Applicative m => Monad m where
(>>=) :: forall a b. m a -> (a -> m b) -> m b
(>>) :: forall a b. m a -> m b -> m b
m >> k = m >>= \ _ -> k
{-# INLINE (>>) #-}
return :: a -> m a
return = pure
For simplicity's sake, this explanation uses the type class hierarchy
class Functor f
class Functor m => Monad m
In Haskell, the current standard hierarchy is
class Functor f
class Functor p => Applicative p
class Applicative m => Monad m
because not only is every monad a functor, but every applicative is a functor and every monad is an applicative, too.
Using the list monad, the imperative pseudocode
for a in (1, ..., 10)
for b in (1, ..., 10)
p <- a * b
if even(p)
yield p
roughly translates to the do block,
do a <- [1 .. 10]
b <- [1 .. 10]
let p = a * b
guard (even p)
return p
the equivalent monad comprehension,
[ p | a <- [1 .. 10], b <- [1 .. 10], let p = a * b, even p ]
and the expression
[1 .. 10] >>= (\ a ->
[1 .. 10] >>= (\ b ->
let p = a * b in
guard (even p) >> -- [ () | even p ] >>
return p
)
)
Do notation and monad comprehensions are syntactic sugar for nested bind expressions. The bind operator is used for local name binding of monadic results.
let x = v in e = (\ x -> e) $ v = v & (\ x -> e)
do { r <- m; c } = (\ r -> c) =<< m = m >>= (\ r -> c)
where
(&) :: a -> (a -> b) -> b
(&) = flip ($)
infixl 0 &
The guard function is defined
guard :: Additive m => Bool -> m ()
guard True = return ()
guard False = fail
where the unit type or “empty tuple”
data () = ()
Additive monads that support choice and failure can be abstracted over using a type class
class Monad m => Additive m where
fail :: m t
(<|>) :: m t -> m t -> m t
infixl 3 <|>
instance Additive Maybe where
fail = Nothing
Nothing <|> m = m
m <|> _ = m
instance Additive [] where
fail = []
(<|>) = (++)
where fail and <|> form a monoid forall k l m.
k <|> fail = k
fail <|> l = l
(k <|> l) <|> m = k <|> (l <|> m)
and fail is the absorbing/annihilating zero element of additive monads
_ =<< fail = fail
If in
guard (even p) >> return p
even p is true, then the guard produces [()], and, by the definition of >>, the local constant function
\ _ -> return p
is applied to the result (). If false, then the guard produces the list monad’s fail ( [] ), which yields no result for a Kleisli arrow to be applied >> to, so this p is skipped over.
State
Infamously, monads are used to encode stateful computation.
A state processor is a function
forall st t. st -> (t, st)
that transitions a state st and yields a result t. The state st can be anything. Nothing, flag, count, array, handle, machine, world.
The type of state processors is usually called
type State st t = st -> (t, st)
The state processor monad is the kinded * -> * functor State st. Kleisli arrows of the state processor monad are functions
forall st a b. a -> (State st) b
In canonical Haskell, the lazy version of the state processor monad is defined
newtype State st t = State { stateProc :: st -> (t, st) }
instance Functor (State st) where
map :: (a -> b) -> ((State st) a -> (State st) b)
map f (State p) = State $ \ s0 -> let (x, s1) = p s0
in (f x, s1)
instance Monad (State st) where
return :: t -> (State st) t
return x = State $ \ s -> (x, s)
(=<<) :: (a -> (State st) b) -> (State st) a -> (State st) b
f =<< (State p) = State $ \ s0 -> let (x, s1) = p s0
in stateProc (f x) s1
A state processor is run by supplying an initial state:
run :: State st t -> st -> (t, st)
run = stateProc
eval :: State st t -> st -> t
eval = fst . run
exec :: State st t -> st -> st
exec = snd . run
State access is provided by primitives get and put, methods of abstraction over stateful monads:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
class Monad m => Stateful m st | m -> st where
get :: m st
put :: st -> m ()
m -> st declares a functional dependency of the state type st on the monad m; that a State t, for example, will determine the state type to be t uniquely.
instance Stateful (State st) st where
get :: State st st
get = State $ \ s -> (s, s)
put :: st -> State st ()
put s = State $ \ _ -> ((), s)
with the unit type used analogously to void in C.
modify :: Stateful m st => (st -> st) -> m ()
modify f = do
s <- get
put (f s)
gets :: Stateful m st => (st -> t) -> m t
gets f = do
s <- get
return (f s)
gets is often used with record field accessors.
The state monad equivalent of the variable threading
let s0 = 34
s1 = (+ 1) s0
n = (* 12) s1
s2 = (+ 7) s1
in (show n, s2)
where s0 :: Int, is the equally referentially transparent, but infinitely more elegant and practical
(flip run) 34
(do
modify (+ 1)
n <- gets (* 12)
modify (+ 7)
return (show n)
)
modify (+ 1) is a computation of type State Int (), except for its effect equivalent to return ().
(flip run) 34
(modify (+ 1) >>
gets (* 12) >>= (\ n ->
modify (+ 7) >>
return (show n)
)
)
The monad law of associativity can be written in terms of >>= forall m f g.
(m >>= f) >>= g = m >>= (\ x -> f x >>= g)
or
do {                        do {                        do {
  r1 <- do {                  x <- m;                     r0 <- m;
    r0 <- m;         =        do {               =        r1 <- f r0;
    f r0                        r1 <- f x;                g r1
  };                            g r1                    }
  g r1                      }
}                           }
Like in expression-oriented programming (e.g. Rust), the last statement of a block represents its yield. The bind operator is sometimes called a “programmable semicolon”.
Iteration control structure primitives from structured imperative programming are emulated monadically
for :: Monad m => (a -> m b) -> [a] -> m ()
for f = foldr ((>>) . f) (return ())
while :: Monad m => m Bool -> m t -> m ()
while c m = do
b <- c
if b then m >> while c m
else return ()
forever :: Monad m => m a -> m b
forever m = m >> forever m
Input/Output
data World
The I/O world state processor monad is a reconciliation of pure Haskell and the real world, of functional denotative and imperative operational semantics. A close analogue of the actual strict implementation:
type IO t = World -> (t, World)
Interaction is facilitated by impure primitives
getChar :: IO Char
putChar :: Char -> IO ()
readFile :: FilePath -> IO String
writeFile :: FilePath -> String -> IO ()
hSetBuffering :: Handle -> BufferMode -> IO ()
hTell :: Handle -> IO Integer
. . . . . .
The impurity of code that uses IO primitives is permanently protocolized by the type system. Because purity is awesome, what happens in IO, stays in IO.
unsafePerformIO :: IO t -> t
Or, at least, should.
The type signature of a Haskell program
main :: IO ()
main = putStrLn "Hello, World!"
expands to
World -> ((), World)
A function that transforms a world.
Epilogue
The category whose objects are Haskell types and whose morphisms are functions between Haskell types is, “fast and loose”, the category Hask.
A functor T is a mapping from a category C to a category D; for each object in C an object in D
Tobj : Obj(C) -> Obj(D)
f :: * -> *
and for each morphism in C a morphism in D
Tmor : HomC(X, Y) -> HomD(Tobj(X), Tobj(Y))
map :: (a -> b) -> (f a -> f b)
where X, Y are objects in C. HomC(X, Y) is the hom-class of all morphisms X -> Y in C. The functor must preserve morphism identity and composition, the “structure” of C, in D.
(T acts as Tmor on the morphisms and as Tobj in the types.)
T(id) = id : T(X) -> T(X) Identity
T(f) . T(g) = T(f . g) : T(X) -> T(Z) Composition
The Kleisli category of a category C is given by a Kleisli triple
<T, eta, _*>
of an endofunctor
T : C -> C
(f), an identity morphism eta (return), and an extension operator _* (=<<).
Each Kleisli morphism in Hask
f : X -> T(Y)
f :: a -> m b
by the extension operator
(_)* : Hom(X, T(Y)) -> Hom(T(X), T(Y))
(=<<) :: (a -> m b) -> (m a -> m b)
is given a morphism in Hask’s Kleisli category
f* : T(X) -> T(Y)
(f =<<) :: m a -> m b
Composition in the Kleisli category .T is given in terms of extension
f .T g = f* . g : X -> T(Z)
f <=< g = (f =<<) . g :: a -> m c
and satisfies the category axioms
eta .T g = g : Y -> T(Z) Left identity
return <=< g = g :: b -> m c
f .T eta = f : Z -> T(U) Right identity
f <=< return = f :: c -> m d
(f .T g) .T h = f .T (g .T h) : X -> T(U) Associativity
(f <=< g) <=< h = f <=< (g <=< h) :: a -> m d
which, applying the equivalence transformations
eta .T g = g
eta* . g = g By definition of .T
eta* . g = id . g forall f. id . f = f
eta* = id forall f g h. f . h = g . h ==> f = g
(f .T g) .T h = f .T (g .T h)
(f* . g)* . h = f* . (g* . h) By definition of .T
(f* . g)* . h = f* . g* . h . is associative
(f* . g)* = f* . g* forall f g h. f . h = g . h ==> f = g
in terms of extension are canonically given
eta* = id : T(X) -> T(X) Left identity
(return =<<) = id :: m t -> m t
f* . eta = f : Z -> T(U) Right identity
(f =<<) . return = f :: c -> m d
(f* . g)* = f* . g* : T(X) -> T(Z) Associativity
(((f =<<) . g) =<<) = (f =<<) . (g =<<) :: m a -> m c
Monads can also be defined not in terms of Kleislian extension, but of a natural transformation mu, in programming called join. A monad is defined in terms of mu as a triple over a category C, of an endofunctor
T : C -> C
f :: * -> *
and two natural transformations
eta : Id -> T
return :: t -> f t
mu : T . T -> T
join :: f (f t) -> f t
satisfying the equivalences
mu . T(mu) = mu . mu : T . T . T -> T Associativity
join . map join = join . join :: f (f (f t)) -> f t
mu . T(eta) = mu . eta = id : T -> T Identity
join . map return = join . return = id :: f t -> f t
The monad type class is then defined
class Functor m => Monad m where
return :: t -> m t
join :: m (m t) -> m t
The canonical mu implementation of the option monad:
instance Monad Maybe where
return = Just
join (Just m) = m
join Nothing = Nothing
The concat function
concat :: [[a]] -> [a]
concat (x : xs) = x ++ concat xs
concat [] = []
is the join of the list monad.
instance Monad [] where
return :: t -> [t]
return = (: [])
(=<<) :: (a -> [b]) -> ([a] -> [b])
(f =<<) = concat . map f
Implementations of join can be translated from extension form using the equivalence
mu = id* : T . T -> T
join = (id =<<) :: m (m t) -> m t
The reverse translation from mu to extension form is given by
f* = mu . T(f) : T(X) -> T(Y)
(f =<<) = join . map f :: m a -> m b
Philip Wadler: Monads for functional programming
Simon L Peyton Jones, Philip Wadler: Imperative functional programming
Jonathan M. D. Hill, Keith Clarke: An introduction to category theory, category theory monads, and their relationship to functional programming
Kleisli category
Eugenio Moggi: Notions of computation and monads
What a monad is not
But why should a theory so abstract be of any use for programming?
The answer is simple: as computer scientists, we value abstraction! When we design the interface to a software component, we want it to reveal as little as possible about the implementation. We want to be able to replace the implementation with many alternatives, many other ‘instances’ of the same ‘concept’. When we design a generic interface to many program libraries, it is even more important that the interface we choose have a variety of implementations. It is the generality of the monad concept which we value so highly, it is because category theory is so abstract that its concepts are so useful for programming.
It is hardly surprising, then, that the generalisation of monads that we present below also has a close connection to category theory. But we stress that our purpose is very practical: it is not to ‘implement category theory’, it is to find a more general way to structure combinator libraries. It is simply our good fortune that mathematicians have already done much of the work for us!
from Generalising Monads to Arrows by John Hughes
Monads Are Not Metaphors, but a practically useful abstraction emerging from a common pattern, as Daniel Spiewak explains.
In addition to the excellent answers above, let me offer you a link to the following article (by Patrick Thomson) which explains monads by relating the concept to the JavaScript library jQuery (and its way of using "method chaining" to manipulate the DOM):
jQuery is a Monad
The jQuery documentation itself doesn't refer to the term "monad" but talks about the "builder pattern" which is probably more familiar. This doesn't change the fact that you have a proper monad there maybe without even realizing it.
A monad is a way of combining computations together that share a common context. It is like building a network of pipes. When constructing the network, there is no data flowing through it. But when I have finished piecing all the bits together with 'bind' and 'return' then I invoke something like runMyMonad monad data and the data flows through the pipes.
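A hedged rendering of this analogy in Haskell, assuming the mtl/transformers State monad; runState plays the role of the runMyMonad invocation above.

import Control.Monad.State (State, modify, get, runState)

network :: State Int Int                        -- only a description of the pipes
network = modify (+ 1) >> modify (* 2) >> get   -- nothing flows yet

-- runState network 5  ==  (12, 12)   -- now the data flows through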
In practice, a monad is a custom implementation of the function composition operator that takes care of side effects and of incompatible input and return values (for chaining).
The two things that helped me best when learning about them were:
Chapter 8, "Functional Parsers," from Graham Hutton's book Programming in Haskell. This doesn't mention monads at all, actually, but if you can work through the chapter and really understand everything in it, particularly how a sequence of bind operations is evaluated, you'll understand the internals of monads. Expect this to take several tries.
The tutorial All About Monads. This gives several good examples of their use, and I have to say that the analogy in Appendix I worked for me.
A Monoid appears to be something that ensures that all operations defined on the Monoid and a supported type will always return a supported type inside the Monoid. E.g., any number + any number = a number, no errors.
Whereas division accepts two Fractionals and returns a Fractional, yet for some reason defines division by zero as Infinity in Haskell (which happens to be a Fractional, also for some reason)...
In any case, it appears Monads are just a way to ensure that your chain of operations behaves in a predictable way, and a function that claims to be Num -> Num, composed with another function of Num -> Num called with x, does not, say, fire the missiles.
On the other hand, if we have a function which does fire the missiles, we can compose it with other functions which also fire the missiles, because our intent is clear -- we want to fire the missiles -- but it won't try printing "Hello World" for some odd reason.
In Haskell, main is of type IO (), or IO [()]; the distinction is strange and I will not discuss it, but here's what I think happens:
If I have main, I want it to do a chain of actions; the reason I run the program is to produce an effect, usually through IO. Thus I can chain IO operations together in main in order to do IO, nothing else.
If I try to do something which does not "return IO", the program will complain that the chain does not flow, basically asking "how does this relate to what we are trying to do, an IO action?" It appears to force the programmer to keep their train of thought without straying off and thinking about firing the missiles while creating algorithms for sorting, which does not flow.
Basically, Monads appear to be a hint to the compiler that says "hey, you know this function that returns a number here, it doesn't actually always work; it can sometimes produce a Number, and sometimes Nothing at all, just keep this in mind". Knowing this, if you try to assert a monadic action, the monadic action may act as a compile-time exception saying "hey, this isn't actually a number, this CAN be a number, but you can't assume this; do something to ensure that the flow is acceptable", which prevents unpredictable program behavior -- to a fair extent.
It appears monads are not about purity, nor control, but about maintaining an identity of a category on which all behavior is predictable and defined, or does not compile. You cannot do nothing when you are expected to do something, and you cannot do something if you are expected to do nothing (visible).
The biggest reason I could think of for Monads is: go look at procedural/OOP code, and you will notice that you do not know where the program starts or ends; all you see is a lot of jumping and a lot of math, magic, and missiles. You will not be able to maintain it, and if you can, you will spend quite a lot of time wrapping your mind around the whole program before you can understand any part of it, because modularity in this context is based on interdependent "sections" of code, where code is optimized to be as related as possible for the promise of efficiency/inter-relation. Monads are very concrete and well defined by definition, and ensure that the flow of the program is possible to analyze, and make it possible to isolate the parts which are hard to analyze -- as they themselves are monads. A monad appears to be a "comprehensible unit which is predictable upon its full understanding". If you understand the "Maybe" monad, there's no possible way it will do anything except be "Maybe", which appears trivial, but in most non-monadic code a simple function "helloworld" can fire the missiles, do nothing, or destroy the universe or even distort time -- we have no idea nor have any guarantees that IT IS WHAT IT IS. A monad GUARANTEES that IT IS WHAT IT IS, which is very powerful.
All things in the "real world" appear to be monads, in the sense that they are bound by definite observable laws preventing confusion. This does not mean we have to mimic all the operations of an object to create classes; instead we can simply say "a square is a square", nothing but a square, not even a rectangle nor a circle, and "a square has an area of the length of one of its existing dimensions multiplied by itself". No matter what square you have, if it's a square in 2D space, its area absolutely cannot be anything but its length squared; it's almost trivial to prove. This is very powerful because we do not need to make assertions to make sure that our world is the way it is; we just use the implications of reality to prevent our programs from falling off track.
I'm pretty much guaranteed to be wrong, but I think this could help somebody out there, so hopefully it helps.
In the context of Scala you will find the following to be the simplest definition. Basically flatMap (or bind) is 'associative' and there exists an identity.
trait M[+A] {
def flatMap[B](f: A => M[B]): M[B] // AKA bind
// Pseudo Meta Code
def isValidMonad: Boolean = {
// for every parameter the following holds
def isAssociativeOn[X, Y, Z](x: M[X], f: X => M[Y], g: Y => M[Z]): Boolean =
x.flatMap(f).flatMap(g) == x.flatMap(f(_).flatMap(g))
// for every parameter X and x, there exists an id
// such that the following holds
def isAnIdentity[X](x: M[X], id: X => M[X]): Boolean =
x.flatMap(id) == x
}
}
E.g.
// These could be any functions
val f: Int => Option[String] = number => if (number == 7) Some("hello") else None
val g: String => Option[Double] = string => Some(3.14)
// Observe these are identical. Since Option is a Monad
// they will always be identical no matter what the functions are
scala> Some(7).flatMap(f).flatMap(g)
res211: Option[Double] = Some(3.14)
scala> Some(7).flatMap(f(_).flatMap(g))
res212: Option[Double] = Some(3.14)
// As Option is a Monad, there exists an identity:
val id: Int => Option[Int] = x => Some(x)
// Observe these are identical
scala> Some(7).flatMap(id)
res213: Option[Int] = Some(7)
scala> Some(7)
res214: Some[Int] = Some(7)
NOTE: Strictly speaking, the definition of a Monad in functional programming is not the same as the definition of a Monad in Category Theory, which is defined in terms of map and flatten, though they are roughly equivalent under certain mappings. This presentation is very good: http://www.slideshare.net/samthemonad/monad-presentation-scala-as-a-category
This answer begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".
Consider these three functions in pseudocode:
f(<x, messages>) := <x, messages "called f. ">
g(<x, messages>) := <x, messages "called g. ">
wrap(x) := <x, "">
f takes an ordered pair of the form <x, messages> and returns an ordered pair. It leaves the first item untouched and appends "called f. " to the second item. Same with g.
You can compose these functions and get your original value, along with a string that shows which order the functions were called in:
f(g(wrap(x)))
= f(g(<x, "">))
= f(<x, "called g. ">)
= <x, "called g. called f. ">
You dislike the fact that f and g are responsible for appending their own log messages to the previous logging information. (Just imagine for the sake of argument that instead of appending strings, f and g must perform complicated logic on the second item of the pair. It would be a pain to repeat that complicated logic in two -- or more -- different functions.)
You prefer to write simpler functions:
f(x) := <x, "called f. ">
g(x) := <x, "called g. ">
wrap(x) := <x, "">
But look at what happens when you compose them:
f(g(wrap(x)))
= f(g(<x, "">))
= f(<<x, "">, "called g. ">)
= <<<x, "">, "called g. ">, "called f. ">
The problem is that passing a pair into a function does not give you what you want. But what if you could feed a pair into a function:
feed(f, feed(g, wrap(x)))
= feed(f, feed(g, <x, "">))
= feed(f, <x, "called g. ">)
= <x, "called g. called f. ">
Read feed(f, m) as "feed m into f". To feed a pair <x, messages> into a function f is to pass x into f, get <y, message> out of f, and return <y, messages message>.
feed(f, <x, messages>) := let <y, message> = f(x)
in <y, messages message>
Notice what happens when you do three things with your functions:
First: if you wrap a value and then feed the resulting pair into a function:
feed(f, wrap(x))
= feed(f, <x, "">)
= let <y, message> = f(x)
in <y, "" message>
= let <y, message> = <x, "called f. ">
in <y, "" message>
= <x, "" "called f. ">
= <x, "called f. ">
= f(x)
That is the same as passing the value into the function.
Second: if you feed a pair into wrap:
feed(wrap, <x, messages>)
= let <y, message> = wrap(x)
in <y, messages message>
= let <y, message> = <x, "">
in <y, messages message>
= <x, messages "">
= <x, messages>
That does not change the pair.
Third: if you define a function that takes x and feeds g(x) into f:
h(x) := feed(f, g(x))
and feed a pair into it:
feed(h, <x, messages>)
= let <y, message> = h(x)
in <y, messages message>
= let <y, message> = feed(f, g(x))
in <y, messages message>
= let <y, message> = feed(f, <x, "called g. ">)
in <y, messages message>
= let <y, message> = let <z, msg> = f(x)
in <z, "called g. " msg>
in <y, messages message>
= let <y, message> = let <z, msg> = <x, "called f. ">
in <z, "called g. " msg>
in <y, messages message>
= let <y, message> = <x, "called g. " "called f. ">
in <y, messages message>
= <x, messages "called g. " "called f. ">
= feed(f, <x, messages "called g. ">)
= feed(f, feed(g, <x, messages>))
That is the same as feeding the pair into g and feeding the resulting pair into f.
You have most of a monad. Now you just need to know about the data types in your program.
What type of value is <x, "called f. ">? Well, that depends on what type of value x is. If x is of type t, then your pair is a value of type "pair of t and string". Call that type M t.
M is a type constructor: M alone does not refer to a type, but M _ refers to a type once you fill in the blank with a type. An M int is a pair of an int and a string. An M string is a pair of a string and a string. Etc.
Congratulations, you have created a monad!
Formally, your monad is the tuple <M, feed, wrap>.
A monad is a tuple <M, feed, wrap> where:
M is a type constructor.
feed takes a (function that takes a t and returns an M u) and an M t and returns an M u.
wrap takes a v and returns an M v.
t, u, and v are any three types that may or may not be the same. A monad satisfies the three properties you proved for your specific monad:
Feeding a wrapped t into a function is the same as passing the unwrapped t into the function.
Formally: feed(f, wrap(x)) = f(x)
Feeding an M t into wrap does nothing to the M t.
Formally: feed(wrap, m) = m
Feeding an M t (call it m) into a function that
passes the t into g
gets an M u (call it n) from g
feeds n into f
is the same as
feeding m into g
getting n from g
feeding n into f
Formally: feed(h, m) = feed(f, feed(g, m)) where h(x) := feed(f, g(x))
Typically, feed is called bind (AKA >>= in Haskell) and wrap is called return.
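As a hedged transcription of this <M, feed, wrap> triple into Haskell, modelling M t as a pair of a value and the accumulated log (all names here are ours):

type Logged t = (t, String)

wrap :: t -> Logged t
wrap x = (x, "")

feed :: (a -> Logged b) -> Logged a -> Logged b
feed f (x, messages) = let (y, message) = f x
                       in  (y, messages ++ message)

f', g' :: Int -> Logged Int
f' x = (x, "called f. ")
g' x = (x, "called g. ")

-- feed f' (feed g' (wrap 3))  ==  (3, "called g. called f. ")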
I will try to explain Monad in the context of Haskell.
In functional programming, function composition is important. It allows our program to consist of small, easy-to-read functions.
Let's say we have two functions: g :: Int -> String and f :: String -> Bool.
We can do (f . g) x, which is just the same as f (g x), where x is an Int value.
When doing composition/applying the result of one function to another, having the types match up is important. In the above case, the type of the result returned by g must be the same as the type accepted by f.
But sometimes values are in contexts, and this makes it a bit less easy to line up types. (Having values in contexts is very useful. For example, the Maybe Int type represents an Int value that may not be there, the IO String type represents a String value that is there as a result of performing some side effects.)
Let's say we now have g1 :: Int -> Maybe String and f1 :: String -> Maybe Bool. g1 and f1 are very similar to g and f respectively.
We can't do (f1 . g1) x or f1 (g1 x), where x is an Int value. The type of the result returned by g1 is not what f1 expects.
We could compose f and g with the . operator, but now we can't compose f1 and g1 with it. The problem is that we can't straightforwardly pass a value in a context to a function that expects a value that is not in a context.
Wouldn't it be nice if we introduced an operator to compose g1 and f1, such that we could write (f1 OPERATOR g1) x? g1 returns a value in a context. The value will be taken out of context and applied to f1. And yes, we have such an operator. It's <=<.
We also have the >>= operator that does for us the exact same thing, though in a slightly different syntax.
We write: g1 x >>= f1. g1 x is a Maybe String value. The >>= operator helps take that String value out of the "perhaps-not-there" context and apply it to f1. The result of f1, which is a Maybe Bool, will be the result of the entire >>= operation.
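For concreteness, here is a hedged pair of definitions with the same shapes as g1 and f1 above (the bodies are invented):

g1 :: Int -> Maybe String
g1 n = if n > 0 then Just (show n) else Nothing

f1 :: String -> Maybe Bool
f1 s = Just (length s > 2)

-- g1 5    >>= f1  ==  Just False
-- g1 (-1) >>= f1  ==  Nothing      -- the context is taken care of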
And finally, why is Monad useful? Because Monad is the type class that defines the >>= operator, very much the same as the Eq type class that defines the == and /= operators.
To conclude, the Monad type class defines the >>= operator that allows us to pass values in a context (we call these monadic values) to functions that don't expect values in a context. The context will be taken care of.
If there is one thing to remember here, it is that Monads allow function composition that involves values in contexts.
A Monad is an Applicative (i.e. something that you can lift binary -- hence, "n-ary" -- functions to,(1) and inject pure values into(2)) Functor (i.e. something that you can map over,(3) i.e. lift unary functions to(3)) with the added ability to flatten the nested datatype (with each of the three notions following its corresponding set of laws). In Haskell, this flattening operation is called join.
The general (generic, parametric) type of this "join" operation is:
join :: Monad m => m (m a) -> m a
for any monad m (NB all ms in the type are the same!).
A specific m monad defines its specific version of join working for any value type a "carried" by the monadic values of type m a. Some specific types are:
join :: [[a]] -> [a] -- for lists, or nondeterministic values
join :: Maybe (Maybe a) -> Maybe a -- for Maybe, or optional values
join :: IO (IO a) -> IO a -- for I/O-produced values
The join operation converts an m-computation producing an m-computation of a-type values into one combined m-computation of a-type values. This allows for combination of computation steps into one larger computation.
This step-combining "bind" operator (>>=) simply uses fmap and join together, i.e.
(ma >>= k) == join (fmap k ma)
{-
ma :: m a -- `m`-computation which produces `a`-type values
k :: a -> m b -- create new `m`-computation from an `a`-type value
fmap k ma :: m ( m b ) -- `m`-computation of `m`-computation of `b`-type values
(ma >>= k) :: m b -- `m`-computation which produces `b`-type values
-}
Conversely, join can be defined via bind: join mma == join (fmap id mma) == mma >>= id, where id ma = ma -- whichever is more convenient for a given type m.
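A hedged spot-check of this equivalence in the list monad (join is exported by Control.Monad):

import Control.Monad (join)

viaJoin, viaBind :: [Int]
viaJoin = join [[1, 2], [3]]      -- [1,2,3]
viaBind = [[1, 2], [3]] >>= id    -- [1,2,3]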
For monads, both the do-notation and its equivalent bind-using code,
do { x <- mx ; y <- my ; return (f x y) } -- x :: a , mx :: m a
-- y :: b , my :: m b
mx >>= (\x -> -- nested
my >>= (\y -> -- lambda
return (f x y) )) -- functions
can be read as
first "do" mx, and when it's done, get its "result" as x and let me use it to "do" something else.
In a given do block, each of the values to the right of the binding arrow <- is of type m a for some type a and the same monad m throughout the do block.
return x is a neutral m-computation which just produces the pure value x it is given, such that binding any m-computation with return does not change that computation at all.
(1) with liftA2 :: Applicative m => (a -> b -> c) -> m a -> m b -> m c
(2) with pure :: Applicative m => a -> m a
(3) with fmap :: Functor m => (a -> b) -> m a -> m b
There are also the equivalent Monad methods,
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
return :: Monad m => a -> m a
liftM :: Monad m => (a -> b) -> m a -> m b
Given a monad, the other definitions could be made as
pure a = return a
fmap f ma = do { a <- ma ; return (f a) }
liftA2 f ma mb = do { a <- ma ; b <- mb ; return (f a b) }
(ma >>= k) = do { a <- ma ; b <- k a ; return b }
If I've understood correctly, IEnumerable is derived from monads. I wonder if that might be an interesting angle of approach for those of us from the C# world?
For what it's worth, here are some links to tutorials that helped me (and no, I still haven't understood what monads are).
http://osteele.com/archives/2007/12/overloading-semicolon
http://spbhug.folding-maps.org/wiki/MonadsEn
http://www.loria.fr/~kow/monads/
What the world needs is another monad blog post, but I think this is useful in identifying existing monads in the wild.
monads are fractals
[image: the Sierpinski triangle]
The above is a fractal called the Sierpinski triangle, the only fractal I can remember how to draw. Fractals are self-similar structures like the above triangle, in which the parts are similar to the whole (in this case exactly half the scale of the parent triangle).
Monads are fractals. Given a monadic data structure, its values can be composed to form another value of the data structure. This is why monads are useful in programming, and this is why they occur in many situations.
http://code.google.com/p/monad-tutorial/ is a work in progress to address exactly this question.
A monad is a container, but for data. A special container.
All containers can have openings and handles and spouts, but these containers are all guaranteed to have certain openings and handles and spouts.
Why? Because these guaranteed openings and handles and spouts are useful for picking up and linking together the containers in specific, common ways.
This allows you to pick up different containers without having to know much about them. It also allows different kinds of containers to link together easily.