I know the functions are all about composition. For example, if I have an arrow from A to B and an arrow from B to C, composition means I also have an arrow from A to C.
But for (>>=), its type is Monad m => m a -> (a -> m b) -> m b. Why does the m a turn into a plain a here?
I was wondering: why not Monad m => m a -> (m a -> m b) -> m b? Wouldn't that make more sense?
@Sibi's answer is right: it wouldn't make sense, or be useful, for monads to be defined with the second signature. But regarding your question about the relation between function composition and monadic composition, there is an alternative way of looking at the operator.
Monads have a bunch of alternative constructions equivalent to the bind/return formulation. One of them is in terms of an operator (<=<) called the Kleisli composition operator that composes monadic operations in a structurally similar way to how functions compose.
Arrows:
Functions : a -> b
Monadic operations : a -> m b
Composition:
-- Function composition
(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)
-- Monad composition
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
f <=< g ≡ \x -> g x >>= f
Gabriel Gonzalez wrote a nice blog post about this pattern: http://www.haskellforall.com/2012/08/the-category-design-pattern.html
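To give a concrete feel for (<=<), here is a small sketch with the Maybe monad; the helpers safeSqrt and safeRecip are invented for this example and are not from the answer above:

import Control.Monad ((<=<))

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

safeRecip :: Double -> Maybe Double
safeRecip x
  | x == 0    = Nothing
  | otherwise = Just (1 / x)

-- Compose the two Maybe-producing steps exactly like ordinary functions:
recipOfSqrt :: Double -> Maybe Double
recipOfSqrt = safeRecip <=< safeSqrt

-- recipOfSqrt 4    == Just 0.5
-- recipOfSqrt 0    == Nothing   (sqrt succeeds, recip fails)
-- recipOfSqrt (-1) == Nothing   (sqrt already fails)

Just like with (.), the composite reads right to left, and a Nothing from either step short-circuits the whole chain.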
Try to implement that function yourself and you will find that it would be useless:
func :: Monad m => m a -> (m a -> m b) -> m b
func x f = f x
So what you are basically suggesting is applying a value to a function. I don't think we need a special function for that. :-) The whole point of >>= is that it executes the side effect of its first parameter and then passes the resulting value to the function.
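For instance, a small IO sketch of my own (not part of the original answer):

-- getLine's effect (reading a line) runs first; the resulting String is
-- then handed to putStrLn, which prints it back out:
echo :: IO ()
echo = getLine >>= putStrLn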
>>= does not correspond to composition, it corresponds to (flipped) application. The flipped version =<< makes this clear:
($) :: (a -> b) -> a -> b
(=<<) :: (Monad m) => (a -> m b) -> m a -> m b
($) takes a unary function and applies it to a value, giving a value; =<< takes a unary action and applies it to the result of a nullary action, giving a nullary action.
The operators corresponding to composition are <=< and >=> from Control.Monad:
(.) :: (b -> c) -> (a -> b) -> a -> c
(<=<) :: (Monad m) => (b -> m c) -> (a -> m b) -> a -> m c
(.) composes two unary functions, giving a unary function; <=< composes two unary actions, giving a unary action.
If you reorder the type signature, it would make a lot more sense to you....
Let modify be (>>=) with flipped order
modify :: Monad m => (a -> m b) -> m a -> m b
or, adding implied parentheses
modify :: Monad m => (a -> m b) -> (m a -> m b)
Now it is clear what is happening.... We are given a "lopsided" function that can't be placed in a pipeline of a fixed data type, and we are converting it into a function that can.... These functions are nicer to deal with, as you can add and remove them from the pipeline at will, apply them N times, even reorder them. (well, for a=b, at least)
The pattern is very common.... a -> m b, for instance, could be a function that takes a value and may or may not return a result (it might fail or produce an error instead).
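A minimal sketch of that pipeline idea with Maybe; safeHead and halveEven are made-up helpers, not from the original answer:

safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

halveEven :: Int -> Maybe Int
halveEven n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- (=<<) turns each a -> Maybe b into a Maybe a -> Maybe b stage,
-- so the stages compose with ordinary (.) into one pipeline of Maybe values:
pipeline :: [Int] -> Maybe Int
pipeline xs = (halveEven =<<) . (halveEven =<<) . (safeHead =<<) $ Just xs

-- pipeline [8,3] == Just 2
-- pipeline [6]   == Nothing   (6 halves to 3, which is odd)
-- pipeline []    == Nothing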
Related
I can't understand the difference between dot (function composition) and bind (>>=).
If I understand correctly, both take the result of one function and pass it along to another function.
So what is the difference?
They are pretty different. Let's look at their signatures:
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
As you said, function composition is just a way to pass the result of one function as an argument to another one, like this:
f = g . h
is equivalent to
f x = g (h x)
You can think of it as a kind of "conveyor", where your value goes through several processing steps.
But (>>=) is quite different. It is tied to the notion of a monad, which is, roughly, a value in some context (it's highly recommended to read an introduction to monads if you aren't familiar with them).
So let x be some value in a context. Our context will be nullability (Maybe monad), and the value is 2. So, x = Just 2. We could, for example, get it as a result of a lookup from some associative container (such operation might fail, that's the reason why it is Maybe Int, but not Int).
Now we want to pass our x to some arithmetic function f that accepts just Int and may fail, so its signature looks like:
f :: Int -> Maybe Int
We can't just pass our value because of the type mismatch. We could unpack x and handle the cases with if, but we could do that in almost any other language. In Haskell, we can use (>>=):
x >>= f
This allows us to chain the effects:
if x is Nothing, then the result is Nothing immediately
else x is unpacked and passed to f
This is a generalization of the operator ?., that you could see in some languages:
x = a?.func1()?.func2();
which checks for null at each "step" and stops immediately if it hits null, or returns the value in case of success. In Haskell it looks like:
x = a >>= func1 >>= func2
However, bind with monads is a much more powerful concept, allowing you, for example, to emulate stateful computations in a language without mutability, like Haskell.
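Here is a runnable version of that ?.-style chain in the Maybe monad; the ages map and the helper names lookupAge and checkAdult are invented for illustration:

import qualified Data.Map as Map

ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 12)]

lookupAge :: String -> Maybe Int
lookupAge name = Map.lookup name ages

checkAdult :: Int -> Maybe Int
checkAdult n = if n >= 18 then Just n else Nothing

-- Each >>= stops at the first Nothing, just as ?. stops at the first null:
adultAge :: String -> Maybe Int
adultAge name = Just name >>= lookupAge >>= checkAdult

-- adultAge "alice" == Just 30
-- adultAge "bob"   == Nothing   (found, but fails the check)
-- adultAge "eve"   == Nothing   (the lookup itself fails)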
(>>=) is a form of function application.
(>>=) :: Monad m => m a -> (a -> m b) -> m b
flip ($) :: a -> (a -> b) -> b
It takes a value, but "extracts" part of it in order to apply the given function. Chaining two functions, like x >>= f >>= g, requires the argument type of g to be different from (but at the same time similar to) the return type of f, unlike composition, which requires the types to match exactly.
Composed with return, it
really is just function application, but restricted to certain kinds of functions.
flip ($) :: a -> (a -> b) -> b
(>>=) . return :: Monad m => a -> (a -> m b) -> m b
(.) is more like (<=<) (from Control.Monad).
(.) :: (b -> c) -> (a -> b) -> a -> c
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
But again, instead of simply passing the result of one function to another, it first "extracts" a value before doing application.
I'm trying to improve my understanding of Applicatives and Monads by implementing their function instances in Javascript. My knowledge of Haskell is limited and I hope that my question makes sense at all.
Here are my implementations of fmap, <*> and >>= for the Functor, Applicative and Monad typeclasses in Javascript:
const fmap = f => g => x => f(g(x)); // B combinator
const apply = f => g => x => f(x) (g(x)); // S combinator
const bind = f => g => x => g(f(x)) (x); // ?
I am not sure whether bind is the correct translation of the Haskell implementation:
(>>=) :: (r -> a) -> (a -> (r -> b)) -> r -> b
instance Monad ((->) r) where
  f >>= k = \ r -> k (f r) r
Provided that bind is correct, how is it interpreted? I know that an Applicative can sequence effectful computations. I also know that a Monad in addition allows you to determine a next effect according to the result of a previous one.
I can see the sequences (eager evaluation order in Javascript):
apply: f(x) ... g(x) ... lambda(result of g) ... result of lambda
bind: f(x) ... g(result of f) ... lambda(x) ... result of lambda
However, the bind function looks pretty weird. Why are f and g nested the other way around? How is the specific Monad behavior (determines a next effect according to a previous one) reflected in this implementation? Actually g(f(x)) (x) looks like a function composition with flipped arguments, where g is a binary function.
When I apply apply/bind with an unary and a binary function, they yield the same result. This doesn't make much sense.
A few footnotes to Lee's answer:
However, the bind function looks pretty weird. Why are f and g
nested the other way around?
Because bind is backwards. Compare (>>=) and its flipped version (=<<):
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
Or, in your specific example:
(>>=) :: (r -> a) -> (a -> (r -> b)) -> (r -> b)
(=<<) :: (a -> (r -> b)) -> (r -> a) -> (r -> b)
While in practice we tend to use (>>=) more often than (=<<) (because of how (>>=), syntactically speaking, lends itself well to the kind of pipeline monads are often used to build), from a theoretical point of view (=<<) is the most natural way of writing it. In particular, the parallels and differences with fmap/(<$>) and (<*>) are much more obvious:
(<$>) :: Functor f => (a -> b) -> f a -> f b
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad f => (a -> f b) -> f a -> f b
When I apply apply/bind with an unary and a binary function, they yield the same result. This doesn't make much sense.
That is an accidental fact about the function instances. Let's put the specialised signatures side by side:
(<*>) :: (r -> (a -> b)) -> (r -> a) -> (r -> b)
(=<<) :: (a -> (r -> b)) -> (r -> a) -> (r -> b)
Monad goes beyond Applicative by providing the means to determine the next effect according to previous results (as opposed to "previous effect" -- Applicative can do that already). The effect, in this case, consists of a function that generates values given an argument of type r. Now, since functions with multiple arguments (i.e. functions that return functions) can be flipped, it happens that there is no significant difference between (r -> (a -> b)) and (a -> (r -> b)) (flip can trivially change one into the other), which makes the Monad instance for (->) r entirely equivalent to the Applicative one.
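To make that equivalence concrete, here is a small sketch of my own (the names apR and bindR are invented) of the two operators specialised to functions by hand, and of how flip converts one into the other:

-- (<*>) specialised to the function instance
apR :: (r -> a -> b) -> (r -> a) -> (r -> b)
apR f g = \r -> f r (g r)

-- (=<<) specialised to the function instance
bindR :: (a -> r -> b) -> (r -> a) -> (r -> b)
bindR k g = \r -> k (g r) r

-- flip swaps the two argument orders, so bind is recoverable from <*>:
bindR' :: (a -> r -> b) -> (r -> a) -> (r -> b)
bindR' k g = apR (flip k) g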
The values in the monad instance for functions have type r -> a for some fixed type r. The function (a -> (r -> b)) given to (>>=) allows you to choose the next function to return given the result from the current value (a function r -> a). f r has type a and k (f r) has type r -> b which is the next function to apply.
In your code g(f(x)) is therefore a function which expects a single argument of type r. The caller of bind can choose this function based on the value returned by the previous function e.g.
var inc = x => x + 1;
var f = bind(inc)(function(i) {
  if (i <= 5) { return x => x * 2; }
  else { return x => x * 3; }
});
The function will be given x as an input and can choose the next stage in the computation based on the result of inc(x) e.g.
f(2); // 4
f(5); // 15
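For comparison, here is roughly the same example written against Haskell's built-in Monad instance for functions (a translation of mine, not from the original answer):

inc :: Int -> Int
inc x = x + 1

f :: Int -> Int
f = inc >>= \i -> if i <= 5 then (* 2) else (* 3)

-- f 2 == 4    (inc 2 is 3, so the doubling branch is chosen and applied to 2)
-- f 5 == 15   (inc 5 is 6, so the tripling branch is chosen and applied to 5)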
Looking at Haskell's bind:
Prelude> :t (>>=)
(>>=) :: Monad m => m a -> (a -> m b) -> m b
I was confused by the following example:
Prelude> let same x = x
Prelude> [[1]] >>= \x -> same x
[1]
Looking at >>='s signature, how does \x -> same x type check with a -> m b?
I would've expected \x -> same x to have produced a [b] type, since the Monad m type here is [], as I understand.
You say
I would've expected \x -> same x to have produced a [b] type, since the Monad m type here is [], as I understand.
and so it does because it is.
We have
[[1]] >>= \ x -> same x
=
[[1]] >>= \ x -> x

[[Int]]     [Int] -> [Int]    :: [Int]
[] [Int]    [Int] -> [] Int   :: [] Int
m  a        a     -> m  b     :: m  b
Sometimes [] is describing a kind of "nondeterminism" effect. Other times, [] is describing a container-like data structure. The fact that it's difficult to tell the difference between which of these two purposes is being served is a feature of which some people are terribly proud. I'm not ready to agree with them, but I see what they're doing.
Looking at >>='s signature, how does \x -> same x type check with a -> m b?
It's actually very simple. Look at the type signatures:
same :: x -> x
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(>>= same) :: Monad m => m a -> (a -> m b) -> m b
                                |________|
                                     |
                                   x -> x
Therefore:
x := a
-- and
x := m b
-- and by transitivity
a := x := m b
-- or
a := m b
Hence:
(>>= same) :: Monad m => m (m b) -> m b
This is just the join function from the Control.Monad module, and for the list monad it is the same as the concat function. Thus:
[[1]] >>= \x -> same x
-- is the same as the following via eta reduction
[[1]] >>= same
-- is the same as
(>>= same) [[1]]
-- is the same as
join [[1]]
-- is the same as
concat [[1]]
-- evaluates to
[1]
I would've expected \x -> same x to have produced a [b] type, since the Monad m type here is [], as I understand.
Indeed, it does. The \x -> same x function which has the type x -> x is specialized to the type [b] -> [b] as I explained above. Hence, (>>= same) is of the type [[b]] -> [b] which is the same as the concat function. It flattens a list of lists.
The concat function is a specialization of the join function which flattens a nested monad.
It should be noted that a monad can be defined in terms of either >>= or fmap and join. To quote Wikipedia:
Although Haskell defines monads in terms of the return and >>= functions, it is also possible to define a monad in terms of return and two other operations, join and fmap. This formulation fits more closely with the original definition of monads in category theory. The fmap operation, with type Monad m => (a -> b) -> m a -> m b, takes a function between two types and produces a function that does the “same thing” to values in the monad. The join operation, with type Monad m => m (m a) -> m a, “flattens” two layers of monadic information into one.
The two formulations are related as follows:
fmap f m = m >>= (return . f)
join n = n >>= id
m >>= g ≡ join (fmap g m)
Here, m has the type Monad m => m a, n has the type Monad m => m (m a), f has the type a -> b, and g has the type Monad m => a -> m b, where a and b are underlying types.
The fmap function is defined for any functor in the category of types and functions, not just for monads. It is expected to satisfy the functor laws:
fmap id ≡ id
fmap (f . g) ≡ (fmap f) . (fmap g)
The return function characterizes pointed functors in the same category, by accounting for the ability to “lift” values into the functor. It should satisfy the following law:
return . f ≡ fmap f . return
In addition, the join function characterizes monads:
join . fmap join ≡ join . join
join . fmap return ≡ join . return = id
join . fmap (fmap f) ≡ fmap f . join
Hope that helps.
As a few people have commented, you've found a really cute property about monads here. For reference, let's look at the signature for bind:
:: Monad m => m a -> (a -> m b) -> m b
In your case, the type a === m b as you have a [[a]] or m (m a). So, if you rewrite the signature of the above bind operation, you get:
:: Monad m => m (m b) -> ((m b) -> m b) -> m b
I mentioned that this is cute, because by extension, this works for any nested monad. e.g.
:: [[b]] -> ([b] -> [b]) -> [b]
:: Maybe (Maybe b) -> (Maybe b -> Maybe b) -> Maybe b
:: Reader (Reader b) -> (Reader b -> Reader b) -> Reader b
If you look at the function that gets applied here, you'll see that it's the identity function (e.g. id, same, :: forall a. a -> a).
This is included in the standard libraries for Haskell, as join. You can look at the source here on hackage. You'll see it's implemented as bind id, or \mma -> mma >>= id, or (=<<) id
As you say m is []. Then a is [Integer] (ignoring the fact that numbers are polymorphic for simplicity's sake) and b is Integer. So a -> m b becomes [Integer] -> [Integer].
First: we should use the standard version of same, it is called id.
Now, let's rename some type variables
id :: (a'' ~ a) => a -> a''
What this means is: the signature of id is that of a function mapping between two types, with the extra constraint that both types be equal. That's all – we do not require any particular properties, like “being flat”.
Why the hell would I write it this way? Well, if we also rename some of the variables in the bind signature...
(>>=) :: (Monad m, a'~m a, a''~m b) => a' -> (a -> a'') -> a''
...then it is obvious how we can plug the id, as the type variables have already been named accordingly. The type-equality constraint a''~a from id is simply taken to the compound's signature, i.e.
(>>=id) :: (Monad m, a'~m a, a''~m b, a''~a) => a' -> a''
or, simplifying that,
(>>=id) :: (Monad m, a'~m a, m b~a) => a' -> m b
(>>=id) :: (Monad m, a'~m (m b)) => a' -> m b
(>>=id) :: (Monad m) => m (m b) -> m b
So what this does is, it flattens a nested monad to a single application of that same monad. Quite simple, and as a matter of fact this is one of the "more fundamental" operations: mathematicians don't define the bind operator, they instead define two morphisms η :: a -> m a (we know that, it's return) and μ :: m (m a) -> m a – yup, that's the one you've just discovered. In Haskell, it's called join.
The monad here is [a] and the example is pointlessly complicated. This’ll be clearer:
Prelude> [[1]] >>= id
[1]
just as
Prelude> [[1]] >>= const [2]
[2]
i.e. >>= is (a flipped) concatMap, and becomes concat when used with id.
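Spelling out that correspondence (these lines are added here and are easy to check in GHCi):

Prelude> [1,2,3] >>= \x -> [x, x]
[1,1,2,2,3,3]
Prelude> concatMap (\x -> [x, x]) [1,2,3]
[1,1,2,2,3,3]
Prelude> [[1],[2,3]] >>= id
[1,2,3]
Prelude> concat [[1],[2,3]]
[1,2,3]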
New to Haskell, and am trying to figure out this Monad thing. The monadic bind operator -- >>= -- has a very peculiar type signature:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
To simplify, let's substitute Maybe for m:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
However, note that the definition could have been written in three different ways:
(>>=) :: Maybe a -> (Maybe a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> ( a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> ( a -> b) -> Maybe b
Of the three the one in the centre is the most asymmetric. However, I understand that the first one is kinda meaningless if we want to avoid (what LYAH calls boilerplate code). However, of the next two, I would prefer the last one. For Maybe, this would look like:
(>>=) :: Maybe a -> (a -> b) -> Maybe b

When this is defined as:

instance Monad Maybe where
  Nothing  >>= f = Nothing
  (Just x) >>= f = return $ f x
Here, a -> b is an ordinary function. Also, I don't immediately see anything unsafe, because the Nothing case is handled before the function application, so the a -> b function will not be called unless a Just a is obtained.
So maybe there is something that isn't apparent to me which has caused the (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b definition to be preferred over the much simpler (>>=) :: Maybe a -> (a -> b) -> Maybe b definition? Is there some inherent problem associated with the (what I think is a) simpler definition?
It's much more symmetric if you think in terms of the following derived function (from Control.Monad):
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g
The reason this function is significant is that it obeys three useful equations:
-- Associativity
(f >=> g) >=> h = f >=> (g >=> h)
-- Left identity
return >=> f = f
-- Right identity
f >=> return = f
These are category laws and if you translate them to use (>>=) instead of (>=>), you get the three monad laws:
(m >>= g) >>= h = m >>= \x -> (g x >>= h)
return x >>= f = f x
m >>= return = m
So it's really not (>>=) that is the elegant operator but rather (>=>) is the symmetric operator you are looking for. However, the reason we usually think in terms of (>>=) is because that is what do notation desugars to.
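As a small sketch of that desugaring (the Maybe example here is mine, not from the original answer):

-- This do block ...
prog :: Maybe Int
prog = do
  x <- Just 3
  y <- Just 4
  return (x + y)

-- ... desugars into a chain of (>>=):
prog' :: Maybe Int
prog' = Just 3 >>= \x -> Just 4 >>= \y -> return (x + y)

-- Both evaluate to Just 7.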
Let us consider one of the common uses of the Maybe monad: handling errors. Say I wanted to divide two numbers safely. I could write this function:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)
Then with the standard Maybe monad, I could do something like this:
foo :: Int -> Int -> Maybe Int
foo a b = do
  c <- safeDiv 1000 b
  d <- safeDiv a c   -- These last two lines could be combined.
  return d           -- I am not doing so for clarity.
Note that at each step, safeDiv can fail, but at both steps, safeDiv takes Ints, not Maybe Ints. If >>= had this signature:
(>>=) :: Maybe a -> (a -> b) -> Maybe b
You could compose functions together, then give it either a Nothing or a Just, and either it would unwrap the Just, go through the whole pipeline, and re-wrap it in Just, or it would just pass the Nothing through essentially untouched. That might be useful, but it's not a monad. For it to be of any use, we have to be able to fail in the middle, and that's what this signature gives us:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
By the way, something with the signature you devised does exist:
flip fmap :: Maybe a -> (a -> b) -> Maybe b
The more complicated function with a -> Maybe b is the more generic and more useful one and can be used to implement the simple one. That doesn't work the other way around.
You can build an a -> Maybe b function from a function f :: a -> b:
f' :: a -> Maybe b
f' x = Just (f x)
Or, in terms of return (which is Just for Maybe):
f' = return . f
The other way around is not necessarily possible. If you have a function g :: a -> Maybe b and want to use it with the "simple" bind, you would have to convert it into a function a -> b first. But this doesn't usually work, because g might return Nothing where the a -> b function needs to return a b value.
So generally the "simple" bind can be implemented in terms of the "complicated" one, but not the other way around. Additionally, the complicated bind is often useful and not having it would make many things impossible. So by using the more generic bind monads are applicable to more situations.
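A sketch of that direction (simpleBind is an invented name; it is the same thing as flip fmap):

simpleBind :: Monad m => m a -> (a -> b) -> m b
simpleBind m f = m >>= (return . f)

-- simpleBind (Just 2) (+1) == Just 3
-- The reverse is not possible in general: a g :: a -> Maybe b that can
-- return Nothing has no plain a -> b version to feed to simpleBind.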
The problem with the alternative type signature for (>>=) is that it only accidentally works for the Maybe monad; if you try it out with another monad (e.g. the list monad) you'll see it breaks down at the type of b for the general case. The signature you provided doesn't describe a monadic bind, and the monad laws don't hold with that definition.
import Prelude hiding (Monad, return)

-- assume monad was defined like this
class Monad m where
  (>>=)  :: m a -> (a -> b) -> m b
  return :: a -> m a

instance Monad Maybe where
  Nothing  >>= f = Nothing
  (Just x) >>= f = return $ f x

instance Monad [] where
  m >>= f  = concat (map f m)
  return x = [x]
Fails with the type error:
Couldn't match type `b' with `[b]'
`b' is a rigid type variable bound by
the type signature for >>= :: [a] -> (a -> b) -> [b]
at monadfail.hs:12:3
Expected type: a -> [b]
Actual type: a -> b
In the first argument of `map', namely `f'
In the first argument of `concat', namely `(map f m)'
In the expression: concat (map f m)
The thing that makes a monad a monad is how 'join' works. Recall that join has the type:
join :: m (m a) -> m a
What 'join' does is "interpret" a monad action that returns a monad action in terms of a monad action. So, you can think of it peeling away a layer of the monad (or better yet, pulling the stuff in the inner layer out into the outer layer). This means that the 'm''s form a "stack", in the sense of a "call stack". Each 'm' represents a context, and 'join' lets us join contexts together, in order.
So, what does this have to do with bind? Recall:
(>>=) :: m a -> (a -> m b) -> m b
And now consider that for f :: a -> m b, and ma :: m a:
fmap f ma :: m (m b)
That is, the result of applying f directly to the a in ma is an (m (m b)). We can apply join to this, to get an m b. In short,
ma >>= f = join (fmap f ma)
Let's say that we have two monadic functions:
f :: a -> m b
g :: b -> m c
h :: a -> m c
The bind function is defined as
(>>=) :: m a -> (a -> m b) -> m b
My question is: why can we not do something like below, i.e. declare a function which takes a monadic value and returns another monadic value?
f :: a -> m b
g :: m b -> m c
h :: a -> m c
The bind function is defined as
(>>=) :: m a -> (m a -> m b) -> m b
What is it in Haskell that restricts a function from taking a monadic value as its argument?
EDIT: I think I did not make my question clear. The point is: when you are composing functions using the bind operator, why is it that the second argument of bind is a function which takes a non-monadic value (b)? Why can't it take a monadic value (m b) and give back m c? Is it that, when you are dealing with monads, the functions you compose will always have the following types?
f :: a -> m b
g :: b -> m c
h :: a -> m c
and h = f `compose` g
I am trying to learn monads and this is something I am not able to understand.
A key ability of Monad is to "look inside" the m a type and see an a; but a key restriction of Monad is that it must be possible for monads to be "inescapable," i.e., the Monad typeclass operations should not be sufficient to write a function of type Monad m => m a -> a. (>>=) :: Monad m => m a -> (a -> m b) -> m b gives you exactly this ability.
But there's more than one way to achieve that. The Monad class could be defined like this:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Monad m where
  return :: a -> m a
  join   :: m (m a) -> m a
You ask why we could not have a Monad m => m a -> (m a -> m b) -> m b function. Well, given f :: a -> b, fmap f :: m a -> m b is basically that. But fmap by itself doesn't give you the ability to "look inside" a Monad m => m a while still being unable to escape from it. However, join and fmap together do give you that ability. (>>=) can be written generically with fmap and join:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
ma >>= f = join (fmap f ma)
In fact this is a common trick for defining a Monad instance when you're having trouble coming up with a definition for (>>=)—write the join function for your would-be monad, then use the generic definition of (>>=).
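For instance, here is a sketch of that trick for the list monad, using fresh names to avoid clashing with the Prelude (joinList is simply concat):

joinList :: [[a]] -> [a]
joinList = concat

-- map plays the role of fmap for lists
bindList :: [a] -> (a -> [b]) -> [b]
bindList m f = joinList (map f m)

-- bindList [1,2,3] (\x -> [x, x*10]) == [1,10,2,20,3,30]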
Well, that answers the "does it have to be the way it is" part of the question with a "no." But, why is it the way it is?
I can't speak for the designers of Haskell, but I like to think of it this way: in Haskell monadic programming, the basic building blocks are actions like these:
getLine :: IO String
putStrLn :: String -> IO ()
More generally, these basic building blocks have types that look like Monad m => m a, Monad m => a -> m b, Monad m => a -> b -> m c, ..., Monad m => a -> b -> ... -> m z. People informally call these actions. Monad m => m a is a no-argument action, Monad m => a -> m b is a one-argument action, and so on.
Well, (>>=) :: Monad m => m a -> (a -> m b) -> m b is basically the simplest function that "connects" two actions. getLine >>= putStrLn is the action that first executes getLine, and then executes putStrLn passing it the result that was obtained from executing getLine. If you had fmap and join and not >>= you'd have to write this:
join (fmap putStrLn getLine)
Even more generally, (>>=) embodies a notion much like a "pipeline" of actions, and as such is the more useful operator for using monads as a kind of programming language.
Final thing: make sure you are aware of the Control.Monad module. While return and (>>=) are the basic functions for monads, there's endless other more high-level functions that you can define using those two, and that module gathers a few dozen of the more common ones. Your code should not be forced into a straitjacket by (>>=); it's a crucial building block that's useful both on its own and as a component for larger building blocks.
why can we not do something like below, i.e. declare a function which takes a monadic value and returns another monadic value?
f :: a -> m b
g :: m b -> m c
h :: a -> m c
Am I to understand that you wish to write the following?
compose :: (a -> m b) -> (m b -> m c) -> (a -> m c)
compose f g = h
  where h = ???
It turns out that this is just regular function composition, but with the arguments in the opposite order
(.) :: (y -> z) -> (x -> y) -> (x -> z)
(g . f) = \x -> g (f x)
Let's choose to specialize (.) with the types x = a, y = m b, and z = m c
(.) :: (m b -> m c) -> (a -> m b) -> (a -> m c)
Now flip the order of the inputs, and you get the desired compose function
compose :: (a -> m b) -> (m b -> m c) -> (a -> m c)
compose = flip (.)
Notice that we haven't even mentioned monads anywhere here. This works perfectly well for any type constructor m, whether it is a monad or not.
Now let's consider your other question. Suppose we want to write the following:
composeM :: (a -> m b) -> (b -> m c) -> (a -> m c)
Stop. Hoogle time. Hoogling for that type signature, we find there is an exact match! It is >=> from Control.Monad, but notice that for this function, m must be a monad.
Now the question is why. What makes this composition different from the other one such that this one requires m to be a Monad, while the other does not? Well, the answer to that question lies at the heart of understanding what the Monad abstraction is all about, so I'll leave a more detailed answer to the various internet resources that speak about the subject. Suffice it to say that there is no way to write composeM without knowing something about m. Go ahead, try it. You just can't write it without some additional knowledge about what m is, and the additional knowledge necessary to write this function just happens to be that m has the structure of a Monad.
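For reference, here is a sketch of how composeM can be written once you do assume Monad; note that the body uses (>>=), which is exactly the extra knowledge about m that plain composition never needed (this is essentially how (>=>) in Control.Monad is defined):

composeM :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
composeM f g = \a -> f a >>= g   -- the same operation as (>=>)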
Let me paraphrase your question a little bit:
why don't we use functions of type g :: m a -> m b with Monads?
The answer is, we do already, with Functors. There's nothing especially "monadic" about fmap f :: Functor m => m a -> m b where f :: a -> b. Monads are Functors; we get such functions just by using good old fmap:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
If you have a function f :: m a -> m b and a monadic value x :: m a, you can simply apply f x. You don't need any special monadic operator for that, just function application. But a function such as f can never "see" a value of type a.
Monadic composition of functions is a much stronger concept, and functions of type a -> m b are the core of monadic computations. If you have a monadic value x :: m a, you cannot "get into it" to retrieve some value of type a. But, if you have a function f :: a -> m b that operates on values of type a, you can combine the value with the function using >>= to get x >>= f :: m b. The point is, f "sees" a value of type a and can work with it (but it cannot return it, it can only return another monadic value). This is the benefit of >>=, and each monad is required to provide its proper implementation.
To compare the two concepts:
If you have g :: m a -> m b, you can compose it with return to get g . return :: a -> m b (and then work with >>=), but
not vice versa. In general there is no way of creating a function of type m a -> m b from a function of type a -> m b.
So composing functions of types like a -> m b is a strictly stronger concept than composing functions of types like m a -> m b.
For example: The list monad represents computations that can give a variable number of answers, including 0 answers (you can view it as non-deterministic computations). The key elements of computing within list monad are functions of type a -> [b]. They take some input and produce a variable number of answers. Composition of these functions takes the results from the first one, applies the second function to each of the results, and merges it into a single list of all possible answers.
Functions of type [a] -> [b] would be different: They'd represent computations that take multiple inputs and produce multiple answers. They can be combined too, but we get something less strong than the original concept.
Perhaps even more distinctive example is the IO monad. If you call getChar :: IO Char and used only functions of type IO a -> IO b, you'd never be able to work with the character that was read. But >>= allows you to combine such a value with a function of type a -> IO b that can "see" the character and do something with it.
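A tiny runnable illustration of that last point (the example program is mine, not from the original answer):

import Data.Char (toUpper)

-- No function of type IO Char -> IO b could inspect the character itself,
-- but the Char -> IO () continuation given to (>>=) can:
echoUpper :: IO ()
echoUpper = getChar >>= \c -> putStrLn ['>', ' ', toUpper c]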
As others have pointed out, there is nothing that restricts a function to take a monadic value as argument. The bind function itself takes one, but not the function that is given to bind.
I think you can make this understandable to yourself with the "Monad is a Container" metaphor. A good example of this is Maybe. While we know how to unwrap a value from the Maybe container, we do not know how to do this for every monad, and for some monads (like IO) it is entirely impossible.
The idea is that the Monad does this behind the scenes in a way you don't have to know about. For example, you do need to work with a value that was returned in the IO monad, but you cannot unwrap it; hence the function that works with it needs to stay in the IO monad itself.
I like to think of a monad as a recipe for constructing a program with a specific context. The power that a monad provides is the ability to, at any stage within your constructed program, branch depending upon the previous value. The usual >>= function was chosen as being the most generally useful interface to this branching ability.
As an example, the Maybe monad provides a program that may fail at some stage (the context is the failure state). Consider this pseudo-Haskell example:
-- take a computation that produces an Int. If the current Int is even, add 1.
incrIfEven :: Monad m => m Int -> m Int
incrIfEven anInt =
  let ourInt = currentStateOf anInt
  in  if even ourInt then return (ourInt+1) else return ourInt
In order to branch based on the current result of a computation, we need to be able to access that current result. The above pseudo-code would work if we had access to currentStateOf :: m a -> a, but that isn't generally possible with monads. Instead we write our decision to branch as a function of type a -> m b. Since the a isn't in a monad in this function, we can treat it like a regular value, which is much easier to work with.
incrIfEvenReal :: Monad m => m Int -> m Int
incrIfEvenReal anInt = anInt >>= branch
  where branch ourInt = if even ourInt then return (ourInt+1) else return ourInt
So the type of >>= is really for ease of programming, but there are a few alternatives that are sometimes more useful. Notably the function Control.Monad.join, which when combined with fmap gives exactly the same power as >>= (either can be defined in terms of the other).
The reason (>>=)'s second argument does not take a monad as input is because there is no need to bind such a function at all. Just apply it:
m :: m a
f :: a -> m b
g :: m b -> m c
h :: c -> m b
(g (m >>= f)) >>= h
You don't need (>>=) for g at all.
The function can take a monadic value if it wants. But it is not forced to do so.
Consider the following contrived definitions, using the list monad and functions from Data.Char:
m :: [[Int]]
m = [[71,72,73], [107,106,105,104]]
f :: [Int] -> [Char]
f mx = do
  g <- [toUpper, id, toLower]
  x <- mx
  return (g $ chr x)
You can certainly run m >>= f; the result will have type [Char].
(It's important here that m :: [[Int]] and not m :: [Int]. >>= always "strips off" one monadic layer from its first argument. If you don't want that to happen, do f m instead of m >>= f.)
As others have mentioned, nothing restricts such functions from being written.
There is, in fact, a large family of functions of type m a -> (m a -> m b) -> m b:
import Control.Monad (replicateM_)

f :: Monad m => Int -> m a -> (m a -> m b) -> m b
f n m mf = replicateM_ n m >> mf m

which, spelled out for small values of n, amounts to:

f 0 m mf = mf m
f 1 m mf = m >> mf m
f 2 m mf = m >> m >> mf m
... etc. ...
(Note the base case: when n is 0, it's simply normal functional application.)
But what does this function do? It performs a monadic action multiple times, finally throwing away all the results, and returning the application of mf to m.
Useful sometimes, but hardly generally useful, especially compared to >>=.
A quick Hoogle search doesn't turn up any results; perhaps a telling result.