Difference in capability between fmap and bind? - haskell

I'm new to functional programming (coming from JavaScript), and I'm having a hard time telling the difference between the two, which is also messing with my understanding of functors vs. monads.
Functor:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
Monad (simplified):
class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
fmap takes a function and a functor, and returns a functor.
>>= takes a function and a monad, and returns a monad.
The difference between the two is in the function parameter:
fmap - (a -> b)
>>= - (a -> m b)
>>= takes a function parameter that returns a monad. I know that this is significant, but I'm having difficulty seeing how this one slight thing makes monads much more powerful than functors. Can someone explain?

Well, (<$>) is an alias for fmap, and (=<<) is the same as (>>=) with the arguments swapped:
(<$>) :: (x -> y) -> b x -> b y
(=<<) :: (x -> b y) -> b x -> b y
The difference is now fairly clear: with the bind function, we apply a function that returns a b y rather than a y. So what difference does that make?
Consider this small example:
foo <$> Just 3
Notice that (<$>) will apply foo to 3, and put the result back into a Just. In other words, the result of this computation cannot be Nothing. On the contrary:
bar =<< Just 3
This computation can return Nothing. (For example, bar x = Nothing will do it.)
We can do a similar thing with the list monad:
foo <$> [Red, Yellow, Blue] -- Result is guaranteed to be a 3-element list.
bar =<< [Red, Yellow, Blue] -- Result can be ANY size.
In short, with (<$>) (i.e., fmap), the "structure" of the result is always identical to the input. But with (=<<) (i.e., (>>=)), the structure of the result can change. This allows conditional execution, reacting to input, and a whole bunch of other things.
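To make the Maybe case concrete, here is a small sketch; the particular foo and bar below are made up purely for illustration:
foo :: Int -> Int
foo x = x * 10

bar :: Int -> Maybe Int
bar x = if even x then Just (x * 10) else Nothing

-- foo <$> Just 3  evaluates to Just 30: the result is always a Just.
-- bar =<< Just 3  evaluates to Nothing: bar was allowed to discard the value.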

The short answer is that if you can turn m (m a) into m a in a way which makes sense, then it's a Monad. This is possible for all Monads but not necessarily for all Functors.
I think the most confusing thing is that all of the common examples of Functors (e.g. List, Maybe, IO) are also Monads. We need an example of something that is a Functor but not a Monad.
I'll use an example from a hypothetical calendar program. The following code defines an Event Functor which stores some data that goes with the event and the time that it occurs.
import Data.Time.LocalTime
data Event a = MkEvent LocalTime a
instance Functor Event where
  fmap f (MkEvent time a) = MkEvent time (f a)
The Event object stores the time that the event occurs and some extra data that can be changed using fmap. Now let's try to make it a Monad:
instance Monad Event where
  (>>=) (MkEvent timeA a) f =
    let (MkEvent timeB b) = f a
    in  MkEvent <notSureWhatToPutHere> b
We find that we can't, because we end up with two LocalTime values: timeA from the given Event and timeB from the Event produced by f a. Our Event type is defined as having only one LocalTime (the time it occurs at), so making it a Monad isn't possible without somehow turning two LocalTimes into one. (There may be cases where doing so makes sense, and then you could turn this into a Monad if you really wanted to; one such choice is sketched below.)
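Purely as an illustration (this is an assumption about the calendar program, not something the Event type forces on you): if combining the two timestamps did make sense, say by keeping the later one, a bind-like operation becomes definable:
-- Hypothetical: resolve the two times by keeping the later one
-- (LocalTime has an Ord instance, so max works).
bindEvent :: Event a -> (a -> Event b) -> Event b
bindEvent (MkEvent timeA a) f =
  let (MkEvent timeB b) = f a
  in  MkEvent (max timeA timeB) b
A full Monad instance would additionally need return :: a -> Event a, and there is no obviously sensible LocalTime for it to use, which is another way of seeing that Event is not naturally a Monad.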

Assume for a moment that IO were just a Functor, and not a Monad. How could we sequence two actions? Say, like getChar :: IO Char and putChar :: Char -> IO ().
We could try mapping over getChar (an action that, when executed, reads a Char from stdin) using putChar.
fmap putChar getChar :: IO (IO ())
Now we have a program that, when executed, reads a Char from stdin and produces a program that, when executed, writes the Char to stdout. But what we actually want is a program that, when executed, reads a Char from stdin and writes the Char to stdout. So we need a "flattening" (in the IO case, "sequencing") function with type:
join :: IO (IO ()) -> IO ()
Functor by itself does not provide this function. But it is a function of Monad, where it has the more general type:
join :: Monad m => m (m a) -> m a
What does all of this have to do with >>=? As it happens, monadic bind is just a combination of fmap and join:
:t \m f -> join (fmap f m)
(Monad m) => m a1 -> (a1 -> m a) -> m a
Another way of seeing the difference is that fmap never changes the overall structure of the mapped value, but join (and therefore >>= as well) can do that.
In terms of IO actions, fmap will never cause additional reads/writes or other effects. But join sequences the reads/writes of the inner action after those of the outer action.
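A minimal sketch of that last point, using only getChar, putChar and join from Control.Monad:
import Control.Monad (join)

-- Reads a Char and produces (but never runs) the program that would write it.
copyUnrun :: IO (IO ())
copyUnrun = fmap putChar getChar

-- Flattening with join actually sequences the write after the read.
copyRun :: IO ()
copyRun = join (fmap putChar getChar)   -- same thing as: getChar >>= putChar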

Related

Does the expressiveness of monads come at the expense of code reuse?

When I compare the binary operations of the Applicative and Monad type classes
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
two differences become apparent:
(<*>) expects a pure function (merely wrapped in the applicative context), whereas bind expects a monadic action, which must return a value in the same monad
with ap the sequence of actions is determined by the applicative, whereas with bind the monadic action can determine the control flow
So monads give me additional expressive power. However, since they no longer accept normal, pure functions, this expressiveness seems to come at the expense of code reuse.
My conclusion might be somewhat naive or even false, since I have merely little experience with Haskell and monadic computations. Any light in the dark is appreciated!
If you have a pure function g :: a -> b, you can lift it to the applicative form with
pure g :: Applicative f => f (a -> b)
or to the monadic (Kleisli) form with
pure . g :: Applicative f => a -> f b
So you don't lose any code reuse in your sense.
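For instance, a tiny illustration of the same point with Maybe and the pure function succ:
-- The pure function succ reused in both styles:
apStyle :: Maybe Int
apStyle = pure succ <*> Just 3        -- Just 4

bindStyle :: Maybe Int
bindStyle = Just 3 >>= (pure . succ)  -- Just 4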
Code reuse is only an advantage if you can reuse code to do what you actually want.
GHCi> putStrLn =<< getLine
Sulfur
Sulfur
GHCi>
Here, (=<<) picks the String result produced in an IO context by getLine and feeds it to putStrLn, which then prints said result.
GHCi> :t getLine
getLine :: IO String
GHCi> :t putStrLn
putStrLn :: String -> IO ()
GHCi> :t putStrLn =<< getLine
putStrLn =<< getLine :: IO ()
Now, in the type of fmap/(<$>)...
GHCi> :t (<$>)
(<$>) :: Functor f => (a -> b) -> f a -> f b
... it is perfectly possible for b to be IO (), and therefore nothing stops us from using putStrLn with it. However...
GHCi> putStrLn <$> getLine
Sulfur
GHCi>
... nothing will be printed.
GHCi> :t putStrLn <$> getLine
putStrLn <$> getLine :: IO (IO ())
Executing an IO (IO ()) action won't execute the inner IO () action. To do that, we need the additional power of Monad, either by replacing (<$>) with (=<<) or, equivalently, by using join on the IO (IO ()) value:
GHCi> :t join
join :: Monad m => m (m a) -> m a
GHCi> join (putStrLn <$> getLine)
Sulfur
Sulfur
GHCi>
Like chi, I also have trouble understanding the premises of your question. You seem to expect that one of Functor, Applicative and Monad will turn out to be better than the others. That is not the case. We can do more things with Applicative than with Functor, and even more with Monad. If you need the additional power, use a suitably powerful class. If not, using a less powerful class will lead to simpler, clearer code and a broader range of available instances.
The reason for this question was my overly schematic understanding of the relations between the Functor, Applicative and Monad classes. I thought it was only about reusing pure functions.
(<*>) essentially says: give me an applicative of functions and applicative values, and I will apply them according to my rules.
(=<<) essentially says: give me a function (a -> m b) and a monad, and I will feed the monad's value to (a -> m b) and leave it to (a -> m b) to produce and return a transformed value wrapped in the same monad.
Of course, the applicative code will be more concise and more reusable, because the mechanism for how a sequence of actions is executed is defined exclusively within (<*>). However, applicative sequences are also somewhat mechanical. So when you want to control the flow of the sequence or the "shape" of the resulting structure, you need the extra power of monads, which leads to more verbose code.
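A small sketch of that contrast, using Maybe (my own example):
-- Applicative: the shape of the computation is fixed before any values are seen.
apSum :: Maybe Int
apSum = (+) <$> Just 1 <*> Just 2                                       -- Just 3

-- Monad: the second step can inspect the first result and change course.
monadStep :: Maybe Int
monadStep = Just 1 >>= \x -> if x > 0 then Just (x * 10) else Nothing   -- Just 10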
I think this question isn't particularly helpful and if people vote for closing, I would have no problem deleting it.

Why is it bind's argument's responsibility to unit its value?

The typical monad bind function has the following signature:
m a -> (a -> m b) -> m b
As I understand it (and I might well be wrong), the function (a -> m b) is just a mapping function from one structure a to another b. Assuming that is correct, it raises the question of why bind's signature is not simply:
m a -> (a -> b) -> m b
Given that unit is part of a monad's definition; why give the function (a -> m b) the responsibility to call unit on whatever value b it produced – wouldn't it be more sensible to make it part of bind?
A function like m a -> (a -> b) -> m b would be equivalent to fmap :: (a -> b) -> f a -> f b. All fmap can do is change the values inside the action; it can't perform new actions. With m a -> (a -> m b) -> m b, you can "run" the m a, feed its result into (a -> m b), and then return a new effect of type m b. Without this, you would only ever be able to have one effect in your program: you couldn't have two print statements in sequence, you couldn't connect to a network and then download a URL, and you couldn't respond to user input; you would only be able to transform the value returned from each primitive operation. It's this operation that makes monads more powerful than either functors or applicatives.
Another detail here is that you aren't necessarily just wrapping a value with unit; that m b could represent an action, not just return something. For example, where is the call to return in the action putStrLn :: String -> IO ()? This function's signature is compatible with the second argument to >>=, with m ~ IO, a ~ String and b ~ (), but there is no call to return anywhere in its body. The point of >>= is to sequence two actions together, not just to wrap values in a context.
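For instance, a sketch of two effects sequenced with (>>=), neither of which calls return:
twoPrints :: IO ()
twoPrints = putStrLn "first" >>= \_ -> putStrLn "second"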
Because m a -> (a -> b) -> m b is just fmap (with its arguments reordered), which a Monad already has, being a Functor.
What a Monad adds to a Functor is the ability to join (or squash) a nested Monad into a simple one.
For example, a list of lists into a simple list: [[1,2],[3]] becomes [1,2,3].
If you replace b with m b in the fmap signature you end up with
m a -> (a -> m b) -> m (m b)
With a normal functor, you are stuck with your double layer of container (m (m b)). With a Monad,
using the join function, you can squash the m (m b) into m b. So bind is in fact join . fmap.
In fact, join and fmap can be written using only bind (and return), so in practice it's easier to define just one function, bind, instead of the two, join and fmap, even though it is often simpler to write the latter.
So basically, bind is a mix of fmap and join.
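Spelled out as a sketch (these are the standard definitions, written with primed names to avoid clashing with the Prelude):
import Control.Monad (join)

-- bind from fmap and join:
bind :: Monad m => m a -> (a -> m b) -> m b
bind mx f = join (fmap f mx)

-- and, conversely, join and a map from bind and return:
join' :: Monad m => m (m a) -> m a
join' mmx = mmx >>= id

fmap' :: Monad m => (a -> b) -> m a -> m b
fmap' f mx = mx >>= (return . f)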
As I understand it (and I might well be wrong), the function (a -> m b) is just a mapping function from one structure a to another b
You're quite right about this, if you change the word "mapping" to "morphism": functions a -> m b are the morphisms of the monad's Kleisli category. In that light, the characteristic feature of monads is that you can compose Kleisli arrows in the same way you can compose functions:
type Kleisli m a b = a -> m b -- `Control.Arrow` has this as a `newtype` with a `Category` instance.
-- compare (.) :: (b->c) -> (a->b) -> a->c
(<=<) :: Monad m => Kleisli m b c -> Kleisli m a b -> Kleisli m a c
(f <=< g) x = f =<< g x
Also, you can use ordinary functions as Kleislis:
(return .) :: Monad m => (a -> b) -> Kleisli m a b
However, Kleislis are strictly more powerful than functions. E.g. for m ≡ IO, they are basically functions which can have side-effects, which as you know ordinary Haskell functions can't. So you can't turn a Kleisli back into a function – and if >>= accepted an a->b rather than a Kleisli m a b, but all you had was a Kleisli, there would be no way to use it.
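A quick usage sketch for m ≡ Maybe; safeRecip here is a made-up Kleisli arrow, not anything standard:
import Control.Monad ((<=<))

-- A made-up Kleisli arrow in Maybe: reciprocal that fails on zero.
safeRecip :: Double -> Maybe Double
safeRecip x
  | x == 0    = Nothing
  | otherwise = Just (1 / x)

-- Kleisli composition chains the possible failures:
recipTwice :: Double -> Maybe Double
recipTwice = safeRecip <=< safeRecip

-- An ordinary function slots in via (return .):
recipThenShow :: Double -> Maybe String
recipThenShow = (return . show) <=< safeRecip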
A function of type a -> m b has potentially many more capabilities than one of type a -> b followed by return (or as you call it, "unit"). In fact no "effectful" operation can be expressed in the latter form.
Another take on this: any useful monad will have a number of operations specific to it, beyond just the ones that come from the monadic interface. For example, the IO monad has getLine :: IO String. Consider this very simple program:
main :: IO ()
main = do name <- prompt "What's your name?"
          putStrLn ("Hello " ++ name ++ "!")

prompt :: String -> IO String
prompt str = do putStrLn str
                getLine
Note that the type of prompt fits the a -> m b mold, but it doesn't use return (a.k.a. unit) anywhere. This is because it uses getLine :: IO String, an opaque operation provided by the IO monad and which cannot be defined in terms of return and >>=.
Think of it this way: ultimately, Monad is never something you use on its own; it's an interface for plugging together things that are extrinsic to it, like getLine and putStrLn.

On the signature of >>= Monad operator

This is the signature of the well-known >>= operator in Haskell
(>>=) :: Monad m => m a -> (a -> m b) -> m b
The question is why type of the function is
(a -> m b)
instead of
(a -> b)
I would say the latter one is more practical because it allows straightforward integration of existing "pure" functions in the monad being defined.
On the contrary, it seems not difficult to write a general "adapter"
adapt :: (Monad m) => (a -> b) -> (a -> m b)
but anyway I regard it as more probable that you already have (a -> b) rather than (a -> m b).
Note. I explain what I mean by "practical" and "probable".
If you haven't defined any monad in a program yet, then the functions you have are "pure" (a -> b) and you will have no functions of type (a -> m b), simply because you have not yet defined m. If you then decide to define a monad m, the need arises to define new a -> m b functions.
Basically, (>>=) lets you sequence operations in such a way that later operations can choose to behave differently based on earlier results. The more pure function you ask for is available in the Functor typeclass and is derivable using (>>=), but if you were stuck with it alone you'd no longer be able to sequence operations at all. There's also an intermediate called Applicative which allows you to sequence operations but not change them based on the intermediate results.
As an example, let's build up a simple IO action type from Functor to Applicative to Monad.
We'll focus on a type GetC which is as follows
data GetC a = Pure a | GetC (Char -> GetC a)
The first constructor will make sense in time, but the second one should make sense immediately—GetC holds a function which can respond to an incoming character. We can turn GetC into an IO action in order to provide those characters
io :: GetC a -> IO a
io (Pure a) = return a
io (GetC go) = getChar >>= (\char -> io (go char))
Which makes it clear where Pure comes from---it handles pure values in our type. Finally, we're going to make GetC abstract: we're going to disallow using Pure or GetC directly and allow our users access only to functions we define. I'll write the most important one now
getc :: GetC Char
getc = GetC Pure
getc is the function which gets a character and then immediately treats it as a pure value. While I called it the most important function, it's clear that right now GetC is pretty useless. All we can possibly do is run getc followed by io... to get an effect totally equivalent to getChar!
io getc === getChar :: IO Char
But we'll build up from here.
As stated at the beginning, the Functor typeclass provides a function exactly like you're looking for called fmap.
class Functor f where
  fmap :: (a -> b) -> f a -> f b
It turns out that we can instantiate GetC as a Functor so let's do that.
instance Functor GetC where
  fmap f (Pure a)  = Pure (f a)
  fmap f (GetC go) = GetC (\char -> fmap f (go char))
If you squint, you'll notice that fmap affects the Pure constructor only. In the GetC constructor it just gets "pushed down" and deferred until later. This is a hint as to the weakness of fmap, but let's try it.
io getc :: IO Char
io (fmap ord getc) :: IO Int
io (fmap (\c -> ord c + 1) getc) :: IO Int
We've gotten the ability to modify the return type of our IO interpretation of our type, but that's about it! In particular, we're still limited to getting a single character and then running back to IO to do anything interesting with it.
This is the weakness of Functor. Since, as you noted, it deals only with pure functions it gets stuck "at the end of a computation" modifying the Pure constructor only.
The next step is Applicative which extends Functor like this
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
In other words it extends the notion of injecting pure values into our context and allowing pure function application to cross over the data type. Unsurprisingly, GetC instantiates Applicative too
instance Applicative GetC where
  pure = Pure
  Pure f   <*> Pure x   = Pure (f x)
  GetC gof <*> getcx    = GetC (\char -> gof char <*> getcx)
  Pure f   <*> GetC gox = GetC (\char -> fmap f (gox char))
Applicative allows us to sequence operations and that might be clear from the definition already. In fact, we can see that (<*>) pushes character application forward so that the GetC actions on either side of (<*>) get performed in order. We use Applicative like this
fmap (,) getc <*> getc :: GetC (Char, Char)
and it allows us to build incredibly interesting functions, much more complex than just Functor would. For instance, we can already form a loop and get an infinite stream of characters.
getAll :: GetC [Char]
getAll = fmap (:) getc <*> getAll
which demonstrates the nature of Applicative being able to sequence actions one after another.
The problem is that we can't stop. io getAll is an infinite loop because it just consumes characters forever. We can't tell it to stop when it sees '\n', for instance, because Applicatives sequence without noticing earlier results.
So let's take the final step and instantiate Monad
instance Monad GetC where
  return = pure
  Pure a  >>= f = f a
  GetC go >>= f = GetC (\char -> go char >>= f)
Which allows us immediately to implement a stopping getAll
getLn :: GetC String
getLn = getc >>= \c -> case c of
  '\n' -> return []
  s    -> fmap (s:) getLn
Or, using do notation
getLn :: GetC String
getLn = do
  c <- getc
  case c of
    '\n' -> return []
    s    -> fmap (s:) getLn
So what gives? Why can we now write a stopping loop?
Because (>>=) :: m a -> (a -> m b) -> m b lets the second argument, a function of the pure value, choose the next action, m b. In this case, if the incoming character is '\n' we choose to return [] and terminate the loop. If not, we choose to recurse.
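For completeness, here is one possible way to run the whole example, assuming all of the GetC definitions above are in scope:
-- Read one line through the GetC interpreter and echo it back.
main :: IO ()
main = io getLn >>= putStrLn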
So that's why you might want a Monad over a Functor. There's much more to the story, but those are the basics.
The reason is that (>>=) is more general. The function you're suggesting is called liftM and can be easily defined as
liftM :: (Monad m) => (a -> b) -> (m a -> m b)
liftM f k = k >>= return . f
This concept has its own type class called Functor with fmap :: (Functor m) => (a -> b) -> (m a -> m b). Every Monad is also a Functor with fmap = liftM, but for historical reasons this isn't (yet) captured in the type-class hierarchy.
And the adapt you're suggesting can be defined as
adapt :: (Monad m) => (a -> b) -> (a -> m b)
adapt f = return . f
Notice that having adapt is equivalent to having return as return can be defined as adapt id.
So anything that has >>= can also have these two functions, but not vice versa. There are structures that are Functors but not Monads.
The intuition behind this difference is simple: a computation within a monad can depend on the results of previous computations. The important piece is (a -> m b), which means that not just b, but also its "effect" m b, can depend on a. For example, we can have
import Control.Monad
mIfThenElse :: (Monad m) => m Bool -> m a -> m a -> m a
mIfThenElse p t f = p >>= \x -> if x then t else f
but it's not possible to define this function with just a Functor m constraint, using just fmap. Functors only allow us to change the value "inside", but we can't take it "out" to decide what action to take.
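A quick usage sketch of mIfThenElse with Maybe:
demoIfThenElse :: [Maybe Int]
demoIfThenElse =
  [ mIfThenElse (Just True)  (Just 1) (Just 2)   -- Just 1
  , mIfThenElse (Just False) (Just 1) (Just 2)   -- Just 2
  , mIfThenElse Nothing      (Just 1) (Just 2)   -- Nothing
  ]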
As others have said, the bind you propose is the fmap function of the Functor class, a.k.a. (<$>).
But why is it less powerful than >>=?
it seems not difficult to write a general "adapter"
adapt :: (Monad m) => (a -> b) -> (a -> m b)
You can indeed write a function with this type:
adapt f x = return (f x)
However, this function is not able to do everything that we might want >>='s argument to do. There are useful values that adapt cannot produce.
In the list monad, return x = [x], so adapt will always return a single-element list.
In the Maybe monad, return x = Just x, so adapt will never return Nothing.
In the IO monad, once you've retrieved the result of an operation, all you can do is compute a new value from it; you can't run a subsequent operation!
etc. So in short, fmap is able to do fewer things than >>=. That doesn't mean it's useless -- it wouldn't have a name if it was :) But it is less powerful.
The whole 'point' of a monad (what puts it above a functor or an applicative) is that you can determine the monad you 'return' based on the values/results of the left-hand side.
For example, >>= on a Maybe type allows us to decide to return Just x or Nothing. You'll note that using functors or applicatives, it is impossible to "choose" to return Just x or Nothing based on the "sequenced" Maybe.
Try implementing something like:
halve :: Int -> Maybe Int
halve n | even n    = Just (n `div` 2)
        | otherwise = Nothing
return 24 >>= halve >>= halve >>= halve
with only (<$>) (i.e. fmap) or (<*>) (i.e. ap).
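To see what goes wrong, compare the types (a sketch; nested and chained are just names I'm giving the two expressions):
-- With fmap the failure cannot propagate; the Maybe layers just pile up.
nested :: Maybe (Maybe Int)
nested = halve <$> Just 24                            -- Just (Just 12)

-- With >>= every step can still decide to produce Nothing.
chained :: Maybe Int
chained = return 24 >>= halve >>= halve >>= halve     -- Just 3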
Actually the "straightforward integration of pure code" that you mention is a significant aspect of the functor design pattern, and is very useful. However, it is in many ways unrelated to the motivation behind >>= --- they are meant for different applications and things.
I had the same question for a while and was wondering why bother with a -> m b when mapping a -> b to m a -> m b looks more natural. This is similar to asking "why do we need a monad given the functor?"
The little answer that I give to myself is that a -> m b accounts for side effects or other complexities that you would not capture with a function a -> b.
Even better wording from here (highly recommended):
monadic value M a can itself be seen as a computation. Monadic functions represent computations that are, in some way, non-standard, i.e. not naturally supported by the programming language. This can mean side effects in a pure functional language or asynchronous execution in an impure functional language. An ordinary function type cannot encode such computations and they are, instead, encoded using a datatype that has the monadic structure.
I'd put emphasis on ordinary function type cannot encode such computations, where ordinary is a -> b.
I think that J. Abrahamson's answer points to the right reason:
Basically, (>>=) lets you sequence operations in such a way that later operations can choose to behave differently based on earlier results. The more pure function you ask for is available in the Functor typeclass and is derivable using (>>=), but if you were stuck with it alone you'd no longer be able to sequence operations at all.
And let me show a simple counterexample against >>= :: Monad m => m a -> (a -> b) -> m b.
It is clear that we want to have values bound to a context. And perhaps we will need to sequentially chain functions over such "context-ed values". (This is just one use case for Monads).
Take Maybe simply as a case of "context-ed value".
Then define a "fake" monad class:
class Mokad m where
  returk :: t -> m t
  (>>==) :: m t1 -> (t1 -> t2) -> m t2
Now let's try to have Maybe be an instance of Mokad
instance Mokad Maybe where
  returk x = Just x
  Nothing >>== f = Nothing
  Just x  >>== f = Just (f x) -- ????? always Just ?????
The first problem appears: once we have a Just, >>== always returns a Just; the function f has no way to signal failure.
Now let's try to chain functions over Maybe using >>==
(we sequentially extract the values of three Maybes just to add them)
chainK :: Maybe Int -> Maybe Int -> Maybe Int -> Maybe Int
chainK ma mb mc = md
  where
    md = ma >>== \a -> mb >>== \b -> mc >>== \c -> returk $ a+b+c
But this code doesn't compile: md's type works out to Maybe (Maybe (Maybe (Maybe Int))) instead of Maybe Int, because every time >>== is used it wraps the previous result in yet another Maybe box.
And on the contrary >>= works fine:
chainOK :: Maybe Int -> Maybe Int -> Maybe Int -> Maybe Int
chainOK ma mb mc = md
  where
    md = ma >>= \a -> mb >>= \b -> mc >>= \c -> return (a+b+c)

haskell function definition IO

Give a definition of the function
fmap :: (a->b) -> IO a -> IO b
the effect of which is to transform an interaction by applying the function to its result. You should define it using the do construct.
How should I define this fmap? I have no idea how to approach it.
Could someone help me with that?
Thanks~!
It looks like homework or something, so I will give you enough hints that you can work out the rest of the details yourself.
fmap1 :: (a -> b) -> IO a -> IO b
fmap1 f action =
Here action is an IO action and f is a function from a to b, and hence has type a -> b.
If you are familiar with monadic bind >>= which has type (simplified for IO monad)
(>>=) :: IO a -> (a -> IO b) -> IO b
Now if you look at
action >>= f
It means perform the IO action which returns an output (say out of type a) and pass the output to f which is of type a -> IO b and hence f out is of type IO b.
If you look at the second function, called return, which has type (again simplified for the IO monad)
return :: a -> IO a
It takes a pure value of type a and gives an IO action of type IO a.
Now let's look back at fmap.
fmap1 f action
should perform the IO action, then run f on the action's output, and then wrap that output back up as an IO action of type IO b. Therefore
fmap1 f action = action >>= g
  where
    g out = return (f out)
Now comes the syntactic sugar of do notation, which is just another way of writing bind (>>=).
In do notation you can get the output of an action by
out <- action
So bind just reduces to
action >>= f = do
  out <- action
  f out
I think now you will be able to convert the definition of fmap to do construct.
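For reference, here is one way the finished definition can look in do notation; try it yourself before peeking:
fmap1 :: (a -> b) -> IO a -> IO b
fmap1 f action = do
  out <- action
  return (f out)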
Are you familiar with map?
The type of map is
map :: (a -> b) -> [a] -> [b]
if you run
map (*5) [1,2,3]
you get
[5,10,15]
The point of map is to give it a transform function and a source list and have it apply the transform to the list to get a result list.
map is fmap for lists. They want you to write an fmap for IO types, does this help?
If you want to know more about fmap, read http://learnyouahaskell.com/making-our-own-types-and-typeclasses#the-functor-typeclass
Note that every monad is already a functor. If you want to reimplement fmap, however, you can do that easily in terms of the monadic functions. One law relating fmap and (>>=) is this:
fmap f xs = xs >>= return . f
If you understand do notation enough, you should be able to translate that yourself. If not, just ask.

Monads - Definition, Laws and Example [duplicate]

Possible Duplicate:
What is a monad?
I am learning to program in the functional language of Haskell and I came across Monads when studying parsers. I had never heard of them before and so I did some extra studying to find out what they are.
Everywhere I look in order to learn this topic just confuses me. I can't really find a simple definition of what a Monad is and how to use them. "A monad is a way to structure computations in terms of values and sequences of computations using those values" - eh???
Can someone please provide a simple definition of what a Monad is in Haskell, the laws associated with them and give an example?
Note: I know how to use the do syntax as I have had a look at I/O actions and functions with side-effects.
Intuition
A rough intuition would be that a Monad is a particular kind of container (a Functor) for which two operations are available: a wrapping operation return that puts a single element into a container, and an operation join that merges a container of containers into a single container.
return :: Monad m => a -> m a
join :: Monad m => m (m a) -> m a
So for the Monad Maybe you have:
return :: a -> Maybe a
return x = Just x
join :: Maybe (Maybe a) -> Maybe a
join (Just (Just x)) = Just x
join (Just Nothing)  = Nothing
join Nothing         = Nothing
Likewise for the Monad [ ] these operations are defined to be:
return :: a -> [a]
return x = [x]
join :: [[a]] -> [a]
join xs = concat xs
The standard mathematical definition of Monad is based on these return and join operators. However in Haskell the definition of the class Monad substitutes a bind operator for join.
Monads in Haskell
In functional programming languages these special containers are typically used to denote effectful computations. The type Maybe a would represent a computation that may or may not succeed, and the type [a] a computation that is non-deterministic. In particular we're interested in functions with effects, i.e. those with types a -> m b for some Monad m. And we need to be able to compose them. This can be done using either the monadic composition operator or the bind operator.
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
(>>=) :: Monad m => m a -> (a -> m b) -> m b
In Haskell the latter is the standard one. Note that its type is very similar to the type of the application operator (but with flipped arguments):
(>>=) :: Monad m => m a -> (a -> m b) -> m b
flip ($) :: a -> (a -> b) -> b
It takes an effectful function f :: a -> m b and a computation mx :: m a returning values of type a, and performs the application mx >>= f. So how do we do this with Monads? Containers (Functors) can be mapped, and in this case the result is a computation within a computation which can then be flattened:
fmap f mx :: m (m b)
join (fmap f mx) :: m b
So we have:
(mx >>= f) = join (fmap f mx) :: m b
To see this working in practice, consider a simple example with lists (non-deterministic functions). Suppose you have a list of possible results mx = [1,2,3] and a non-deterministic function f x = [x-1, x*2]. To calculate mx >>= f you begin by mapping mx with f and then you merge the results:
fmap f mx = [[0,2],[1,4],[2,6]]
join [[0,2],[1,4],[2,6]] = [0,2,1,4,2,6]
Since in Haskell the bind operator (>>=) is more important than join, for efficiency reasons the latter is defined in terms of the former and not the other way around.
join mx = mx >>= id
Also, the bind operator can in turn be used to define a mapping operation. For this reason Monads were historically not required to be instances of the class Functor (today Functor is a superclass of Monad). The equivalent operation to fmap is called liftM in the Monad library.
liftM f mx = mx >>= \x-> return (f x)
So the actual definition for the Monad Maybe becomes:
return :: a -> Maybe a
return x = Just x
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= f = Nothing
Just x >>= f = f x
And for the Monad [ ]:
return :: a -> [a]
return x = [x]
(>>=) :: [a] -> (a -> [b]) -> [b]
xs >>= f = concat (map f xs)
-- equivalently: xs >>= f = concatMap f xs  (same as above but more efficient)
When designing your own Monads you may find it easier, instead of trying to define (>>=) directly, to split the problem into parts and figure out how to map and join your structures. Having map and join can also be useful to verify that your Monad is well defined, in the sense that it satisfies the required laws.
Monad Laws
Your Monad should be a Functor, so the mapping operation should satisfy:
fmap id = id
fmap g . fmap f = fmap (g . f)
The laws for return and join are:
join . return = id
join . fmap return = id
join . join = join . fmap join
The first two laws specify that merging undoes wrapping. If you wrap a container in another one, join gives you back the original. If you map the contents of a container with a wrapping operation, join again gives you back what you initially had. The last law is the associativity of join. If you have three layers of containers you get the same result by merging from the inside or the outside.
Again you can work with bind instead of join and fmap. You get fewer but (arguably) more complicated laws:
return a >>= f = f a
m >>= return = m
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
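As a quick check, the identity laws can be verified for Maybe directly from the definitions given earlier (a sketch in equational-reasoning comments):
-- Left identity for Maybe, using  return x = Just x  and  Just x >>= f = f x:
--   return a >>= f
-- = Just a >>= f
-- = f a
--
-- Right identity:
--   Just x  >>= return = return x = Just x
--   Nothing >>= return = Nothing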
A monad in Haskell is something that has two operations defined:
(>>=) :: Monad m => m a -> (a -> m b) -> m b -- also called bind
return :: Monad m => a -> m a
These two operations need to satisfy certain laws that really might just confuse you at this point, if you don't have a knack for mathy ideas. Conceptually, you use bind to operate on values on a monadic level and return to create monadic values from "trivial" ones. For instance,
getLine :: IO String,
so you cannot modify or putStrLn this String directly -- because it's not a String but an IO String!
Well, we have an IO Monad handy, so not to worry. All we have to do is use bind to do what we want. Let's see what bind looks like in the IO Monad:
(>>=) :: IO a -> (a -> IO b) -> IO b
And if we place getLine at the left hand side of bind, we can make it more specific yet.
(>>=) :: IO String -> (String -> IO b) -> IO b
Okay, so getLine >>= putStrLn . (++ ". No problem after all!") would print the entered line with the extra content added. The right hand side is a function that takes a String and produces an IO () - that wasn't hard at all! We just go by the types.
There are Monads defined for a lot of different types, for instance Maybe and [a], and they behave conceptually in the same way.
Just 2 >>= return . (+2) would yield Just 4, as you might expect. Note that we had to use return here, because otherwise the function on the right hand side would not match the return type m b, but just b, which would be a type error. It worked in the case of putStrLn because it already produces an IO something, which was exactly what our type needed to match. (Spoiler: Expressions of shape foo >>= return . bar are silly, because every Monad is a Functor. Can you figure out what that means?)
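(For reference, the spoiler resolved: since every Monad is a Functor, foo >>= return . bar is just fmap bar foo. A quick sketch:)
spoilerResolved :: Bool
spoilerResolved = (Just 2 >>= return . (+2)) == fmap (+2) (Just 2)   -- True; both are Just 4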
I personally think that this is as far as intuition will get you on the topic of monads, and if you want to dive deeper, you really do need to dive into the theory. I liked getting a hang of just using them first. You can look up the source for various Monad instances, for instance the List ([]) Monad or Maybe Monad on Hoogle and get a bit smarter on the exact implementations. Once you feel comfortable with that, have a go at the actual monad laws and try to gain a more theoretical understanding for them!
The Typeclassopedia has a section about Monad (but do read the preceding sections about Functor and Applicative first).

Resources