With monads, can join be defined in terms of bind? - haskell

In Haskell, monads are defined in terms of the functions return and bind, where return has type a -> m a and bind has type m a -> (a -> m b) -> m b. It's been pointed out before that monads can also be defined in terms of return and join, where join is a function with type m (m a) -> m a. Bind can be defined in terms of join, but is the reverse possible? Can join be defined in terms of bind?
Without join, I have no idea what I'd do if I ever somehow got ahold of a "twice wrapped" monadic value, m (m a) - none of the functor or monad operations "remove any layers", so to speak. If this isn't possible, why do Haskell and many other monad implementations define them in terms of bind? It seems strictly less useful than a join-based definition.

It is possible:
join :: Monad m => m (m a) -> m a
join m = (m >>= id)
Note the tricky instantiation of >>=:
(>>=) :: m b -> (b -> m c) -> m c
-- choosing b ~ m a , c ~ a
(>>=) :: m (m a) -> (m a -> m a) -> m a
so we can correctly choose id for the second argument.
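For anyone who wants to try this out, here is a minimal, self-contained sketch (the name myJoin and the sample values are mine, chosen to avoid clashing with Control.Monad.join):
myJoin :: Monad m => m (m a) -> m a
myJoin m = m >>= id

main :: IO ()
main = do
  print (myJoin [[1, 2], [3]])                   -- [1,2,3]
  print (myJoin (Just (Just True)))              -- Just True
  print (myJoin (Just (Nothing :: Maybe Bool)))  -- Nothing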

Although the question has already been sufficiently answered, I thought it was worth noting the following for any Haskell newcomers.
One of the first things you see when learning about monads in Haskell is the definition for lists:
instance Monad [] where
  return x = [x]
  xs >>= f = concatMap f xs
Here, we see that the functionality of bind for lists is equivalent to concatMap, just with the arguments flipped around. This makes sense when inspecting the types:
concatMap :: (a -> [b]) -> [a] -> [b]
bind :: Monad m => (a -> m b ) -> m a -> m b -- (>>=) flips the arguments
It also makes intuitive sense that the definition of join for lists is equivalent to
concat :: [[a]] -> [a].
The function names already hint at the relationship: concat can be recovered from concatMap by mapping the identity function, which leaves the inner lists as they are and cancels out the "map" part of concatMap:
concatMap id xs
= concat (map id xs)
= concat ( id xs)
= concat xs -- or simply 'concat = concatMap id'
The same property holds true for monads in general:
join x = x >>= id -- or 'join = bind id'
This really comes from the fact that
bind f m = join (fmap f m)
so that
bind id m = join (fmap id m) -- bind id = join . fmap id
= join ( id m) -- = join . id
= join m -- = join
because all monads are Functors first, and by Functor laws fmap id === id.
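To make the chain of equalities concrete, here is a small check (the sample list is my own):
xs :: [[Int]]
xs = [[1, 2], [], [3]]

checkConcat :: Bool
checkConcat = concatMap id xs == concat xs   -- the "map" part cancelled out
           && (xs >>= id)     == concat xs   -- join for lists is concat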

Yes it's fairly simple:
join m = m >>= id

Bind (>>=) does in fact "remove a layer":
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Intuitively it "gets some as out of the m a", and feeds then to the a -> m b function, and then produces a single m b from the results.
People usually say that it requires the function argument to wrap up its output again in m, but that's not actually the case. It requires the function's output to be something wrapped up in m, but it doesn't matter where the wrapping came from.
In the case of implementing join we're starting from something "double-wrapped": m (m a). We can plug that into the signature for bind and immediately figure out the type of function we could use when binding a "double-wrapped" value:
m (m a) -> (m a -> m b) -> m b
Now the function used with bind here is going to receive a value that's already wrapped in m. So we don't have to "re-wrap" anything; if we return it unmodified it'll already be the right type for the output. Effectively that's "removed one layer of wrapping" - and this works for any layer but the last one.
So that tells us we just have to bind with id:
join = (>>= id)

Can join be defined in terms of bind?
TL;DR answer: Yes.
join ∷ (Monad m) ⇒ m (m a) → m a
join = (=<<) id
The longer answer:
To add some subtleties that have yet to be mentioned I'll provide a new answer, starting by expanding upon Lee's answer, because it is worth noting that their answer can be simplified. Starting with the original:
join ∷ (Monad m) ⇒ m (m a) → m a
join m = m >>= id
One can look for an Eta conversion (η-conversion) opportunity to make the function definition point-free. To do this we want to first rewrite our function definition without the infix >>= (as would likely be done if we were calling >>= by the name bind in the first place).
join m = (>>=) m id
Now observe that if we use the flip function, recalling:
-- defined in Data.Function
-- for a function of two arguments, swap their order
flip ∷ (a → b → c) → b → a → c
flip f b a = f a b
One may now use flip to put the m in position for an η-reduction:
join m = (flip (>>=)) id m
Applying the η-reduction:
join = (flip (>>=)) id
Noticing now that flip (>>=) can be replaced with (=<<) (defined in Control.Monad):
join = (=<<) id
Finally, we arrive at the shorter, point-free definition:
join ∷ (Monad m) ⇒ m (m a) → m a
join = (=<<) id
Where (=<<) has type:
(=<<) ∷ ∀ (m ∷ * → *) a b. (Monad m) ⇒ (a → m b) → m a → m b
which in the process gets instantiated to:
(=<<) ∷ (m a → m a) → m (m a) → m a
Additionally, one may also notice that if we put the code above back into infix form, the flip becomes implicit, and we get the same final answer as Ben does:
join = (>>= id)
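As a quick sanity check (my own snippet; myJoin' is just a local name), the point-free definition agrees with the library join:
import Control.Monad (join)

myJoin' :: Monad m => m (m a) -> m a
myJoin' = (=<<) id

sameAsLibraryJoin :: Bool
sameAsLibraryJoin = myJoin' [[1, 2], [3]] == join [[1, 2], [3 :: Int]]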

Related

What is the mathematical theory or theorem underlying join of monad?

For example:
Maybe (Maybe Bool) -> Maybe Bool
Just (Just True) -----> Just True
Just (Just False) ----> Just False
Just (Nothing) -------> Nothing
Nothing --------------> ?
It would map Nothing to Nothing. What is the mathematical theory or theorem underlying it?
If related to category theory, what part is it related to?
Is there a mathematical theory related to the Behavior of join of State, Writer, Reader, etc?
Is there any guarantee that m (m a) -> m a is safe?
edited:
Since the result type a of m a -> a forgets the structure, the result type m a of m (m a) -> m a is used instead, so that the effect of the outer m (...) is compounded with the effect of the inner m rather than thrown away.
Strictly speaking, the two pieces of information (effects) are compounded into one, and the original nested structure no longer exists.
I thought it was important to guarantee that there is no problem in doing so. Is it up to the programmer, without any special rules or theory?
The compound doesn't look natural to me; it looks artificial.
Sorry for the vague question, thanks for all the comments.
The Mathematical definition of a monad, with translation to Haskell:
A couple of incidental preliminary notes
I'm going to assume you're familiar with Functors in Haskell. If not I'm tempted to direct you to my explanation on this other question here. I'm not going to explain category theory to you except by translating it into Haskell as best I can.
Identity functor vs Identity Functor instance.
Note: Firstly let me point out that the identity functor in mathematics does nothing, whereas the Identity functor in Haskell adds a newtype wrapper. Whenever we use the mathematical identity functor on a type a we should just get a back, so I won't be using the Identity functor instance.
Natural transformations
Secondly, note that in Haskell a natural transformation between two functors (either of which may be the identity) is a polymorphic function e between two types built from (possibly the mathematical identity) Functor instances, for example [a] -> Maybe a or (Int,a) -> Either String a, such that e . fmap f == fmap f . e.
So safeLast :: [a] -> Maybe a is a natural transformation, because safeLast (map f xs) == fmap f (safeLast xs), and even
rejectSomeSmallNumbers :: (Int,a) -> Either String a
rejectSomeSmallNumbers (i,a) = case i of
  0 -> Left "Way too small!"
  1 -> Left "Too small!"
  2 -> Left "Two is small."
  3 -> Left "Three is small, too."
  _ -> Right a
is a natural transformation because rejectSomeSmallNumbers . fmap f == fmap f . rejectSomeSmallNumbers :: (Int,a) -> Either String b.
A natural transformation can use as much information as it likes about the two functors it connects (eg (,) Int and Either String) but it can't use any information about the type a any more than the functors can. It shouldn't be possible to write a polymorphic function between two valid functor types that's not a natural transformation. See this answer for more information.
What is a monad according to Maths and Haskell?
Let H be a category (let Hask be the kind of all haskell types together with function types, functions, etc etc).
A monad on H is (a monad in Hask is)
an endofunctor M : H -> H
a type constructor m :: * -> * which has a Functor instance with fmap :: (a -> b) -> (m a -> m b)
and a natural transformation eta : 1_H -> M
a polymorphic function from a -> m a, called pure defined in the Applicative instance
a natural transformation mu : M^2 -> M
a polymorphic function from m (m a) -> m a, called join that is defined in Control.Monad
such that the following two rules hold:
mu . M mu == mu . mu M as natural transformations M^3 -> M
join . fmap join == join . join :: m (m (m a)) -> m a
mu . M eta == mu . eta M == 1_H as natural transformations M -> M
join . fmap pure == join . pure == id :: m a -> m a
What do these two rules mean?
Just to give you a handle on what these two conditions are saying, here they are when we're using lists:
(join . fmap join) [xss, yss, zss]
== join [join xss, join yss, join zss]
== join (join [xss, yss, zss])
and
join (fmap pure xs)
== join [[x] | x <- xs]
== xs
== id xs
== join [xs]
== join (pure xs)
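If you want to check these two laws concretely for the list monad, here is a small sketch (the sample values are mine):
import Control.Monad (join)

firstLaw :: Bool
firstLaw = (join . fmap join) xsss == (join . join) xsss
  where xsss = [[[1], [2, 3]], [[4]]] :: [[[Int]]]

secondLaw :: Bool
secondLaw = (join . fmap pure) xs == xs && (join . pure) xs == xs
  where xs = [1, 2, 3] :: [Int]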
(Fun fact, join isn't part of the monad definition. I have a perhaps unreliable memory that it used to be, but in Control.Monad it's defined as join x = x >>= id and as commented there, it could be defined as join bss = do { bs <- bss ; bs })
What does this mean for the Maybe monad in your example?
Well firstly, because join is polymorphic (mu is a natural transformation), it can't use any information about the type a in Maybe a, so we couldn't for example make it so that join (Just (Just False)) = Just True or join (Just Nothing) = Just False because we can only use values that are already in the Maybe a we're given:
join :: Maybe (Maybe a) -> Maybe a
join Nothing = Nothing -- can't provide Just a because we have no a
join (Just Nothing) = Nothing -- same reason
join (Just (Just a)) =
-- two choices: we could do the obviously correct `Just a` or collapse everything with `Nothing`.
pure :: a -> Maybe a
pure a =
-- two choices: we could do the obviously correct `Just a` or collapse everything with `Nothing`.
What stops us doing the crazy Nothing thing?
Let's look at the two rules, specialising to Maybe, and to the Just branches, because all the Nothings are inevitably Nothing because of polymorphism.
(join . fmap join) (Just maybemaybe)
== join (Just (join maybemaybe))
== join (join (Just maybemaybe)) -- required by the rule
That one works if we put Just a in the definition, or if we put Nothing, too.
In the second rule:
join (fmap pure (Just a))
== join (Just (pure a))
== join (pure (Just a))
== id (Just a) -- by the rule
== Just a
Well that forces pure to be Just, and at the same time forces join (Just (Just a)) to give us Just a.
Reader
Let's ditch the newtype wrapping to make the laws easier to talk about.
type Reader input a = input -> a
We'd need
join :: Reader input (Reader input a) -> Reader input a
join (make_an_a_maker :: input -> (input -> a)) :: input -> a
join make_an_a_maker input = (make_an_a_maker input) input
There isn't anything else we can do without using undefined or similar.
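For completeness, here is that Reader join as a runnable sketch (joinReader and the example are my own names; the real library Reader wraps this function type in a newtype):
type Reader input a = input -> a

joinReader :: Reader input (Reader input a) -> Reader input a
joinReader make_an_a_maker input = make_an_a_maker input input

-- e.g. a nested reader that adds the environment to itself:
example :: Int
example = joinReader (\n m -> n + m) 10   -- 20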
So what stops you making crazy join functions?
Most of the time, the fact that you're making a polymorphic function; some of the time, because you want to do the obviously correct thing and it works; and the rest of the time, because you chose to follow the rules.
Not-relevant nerd note:
I prefer to think of Monads as type constructors m so that Kleisli composition is associative, with the unit being pure:
(>=>) :: (a -> m b)
      -> (b -> m c)
      -> (a -> m c)
(first >=> second) a = do
  b <- first a
  c <- second b
  return c
or if you prefer
(first >=> second) a =
  first a >>= \b -> second b
so the laws are
(one >=> two) >=> three == one >=> (two >=> three) and
k >=> pure == pure >=> k == k
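Here is a tiny check of the two identity laws on a sample Maybe action (the action k is my own example):
import Control.Monad ((>=>))

k :: Int -> Maybe Int
k x = if x > 0 then Just (x * 2) else Nothing

identityLaws :: Bool
identityLaws = (k >=> pure) 3 == k 3 && (pure >=> k) 3 == k 3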
I think your confusion is over the fact that Nothing is not a single value. It is a polymorphic value that can be specialized to any number of distinct values, depending on how a is fixed:
> :set -XTypeApplications
> :t Nothing
Nothing :: Maybe a
> :t Nothing @Int
Nothing @Int :: Maybe Int
> :t Nothing @Bool
Nothing @Bool :: Maybe Bool
> :t Nothing @(Maybe Bool)
Nothing @(Maybe Bool) :: Maybe (Maybe Bool)
Similarly, join :: Monad m => m (m a) -> m a can be specialized:
> :t join @Maybe
join @Maybe :: Maybe (Maybe a) -> Maybe a
> :t join @Maybe @Bool
join @Maybe @Bool :: Maybe (Maybe Bool) -> Maybe Bool
Maybe (Maybe Bool) has four values:
Just (Just True)
Just (Just False)
Just Nothing
Nothing
Maybe Bool has three values:
Just True
Just False
Nothing
join :: Maybe (Maybe Bool) -> Maybe Bool is not an injection; it maps two different values of type Maybe (Maybe Bool) to the same value of type Maybe Bool:
join (Just (Just True)) == Just True
join (Just (Just False)) == Just False
join (Just Nothing) == Nothing
join Nothing == Nothing
Both Just Nothing :: Maybe (Maybe Bool) and Nothing :: Maybe (Maybe Bool) are mapped to Nothing :: Maybe Bool.
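The table above can be checked directly (this little test is my own):
import Control.Monad (join)

table :: [(Maybe (Maybe Bool), Maybe Bool)]
table = [ (Just (Just True),  Just True)
        , (Just (Just False), Just False)
        , (Just Nothing,      Nothing)
        , (Nothing,           Nothing) ]

allMatch :: Bool
allMatch = all (\(input, expected) -> join input == expected) table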

haskell - the type of join function

data M a = M a deriving (Show)
unitM a = M a
bindM (M a) f = f a
joinM :: M (M a) -> M a
joinM m = m `bindM` id
joinM' :: M a -> a
joinM' m = m `bindM` id
Note that joinM (M 0) will fail to type check, whereas joinM' (M 0) will be fine.
My question: why is joinM defined as M (M a) -> M a but not as M a -> a?
From my understanding,
unitM puts the value a into the monad M a
joinM gets the value a from the monad M a
So joinM should really work on any monad, i.e., not necessarily nested ones such as M (M a), right?
The point of monads is that you can't get a value out of them. If join had type m a -> a then the IO monad would be perfectly useless, since you could just extract the values freely. The point of monads is that you can chain computations together (>>= can be defined in terms of join, provided you have return and fmap) and put values into a monadic context, but you can't (in general) get them out.
In your specific case, you've defined what is essentially the identity monad. In that case, it's easy to extract the value; you just strip away the layer of M and move on with your life. But that's not true for general monads, so we restrict the type of join so that more things can be monads.
Your bindM is not of the correct type, by the way. The general type of >>= is
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Your function has type
bindM :: M a -> (a -> b) -> b
Notice that your type is more general. Hence, again, in your specific case, you can get away with being looser on the requirements of joinM, whereas specific monads cannot. Try giving bindM an explicit type signature of M a -> (a -> M b) -> M b and then see if both of your join functions still typecheck.
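Following that suggestion, here is a sketch with bindM given the standard bind type (my own reworking of the question's code); joinM still typechecks, while joinM' no longer does:
data M a = M a deriving Show

bindM :: M a -> (a -> M b) -> M b
bindM (M a) f = f a

joinM :: M (M a) -> M a
joinM m = m `bindM` id

-- joinM' :: M a -> a
-- joinM' m = m `bindM` id
-- rejected: the signature promises it for every a, but using id here forces a ~ M b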
Given a type constructor M :: * -> *, and a type a, consider the following sequence of types
a, M a, M (M a), M (M (M a)), ...
If we have polymorphic functions return :: b -> M b and extract :: M b -> b (your alternative join), we can convert a value of any type above to any other type above. Indeed, we can add and remove M as wanted using these two functions, choosing the type b suitably. In more casual words, we can move both to the right and to the left in such type sequence.
In a monad, instead, we can move to the right without limits (using return). We can also move to the left almost everywhere: the important exception being that we can not move from M a to a. This is realized by join :: M (M c) -> M c, which has the type of extract :: M b -> b restricted to the case b = M c. So essentially, we can move left (as with extract), but only when we end up in a type which has at least one M -- hence, no further to the left than M a.
As Carl mentions above in the comments this restriction makes it possible to have more monads. For instance, if M = [] is the list monad, we can properly implement return and join but not extract.
return :: a -> [a]
return x = [x]
join :: [[a]] -> [a]
join xss = concat xss
Instead extract :: [a] -> a can not be a total function, since extract [] :: a would be well typed, yet tries to extract a value of type a from the empty list. It is a well-known theoretical result that no total expression can have the polymorphic type ... :: a. We can have undefined :: a, fromJust Nothing :: a, or head [] :: a but all of these are not total, and will raise an error when evaluated.

Understanding "Monad m" in >>=

Looking at Haskell's bind:
Prelude> :t (>>=)
(>>=) :: Monad m => m a -> (a -> m b) -> m b
I was confused by the following example:
Prelude> let same x = x
Prelude> [[1]] >>= \x -> same x
[1]
Looking at >>='s signature, how does \x -> same x type check with a -> m b?
I would've expected \x -> same x to have produced a [b] type, since the Monad m type here is [], as I understand.
You say
I would've expected \x -> same x to have produced a [b] type, since the Monad m type here is [], as I understand.
and so it does because it is.
We have
[[1]] >>= \ x -> same x
=
[[1]] >>= \ x -> x

[[Int]]      [Int] -> [Int]    ::  [Int]
[] [Int]     [Int] -> [] Int   ::  [] Int
 m  a          a   ->  m   b   ::   m  b
Sometimes [] is describing a kind of "nondeterminism" effect. Other times, [] is describing a container-like data structure. The fact that it's difficult to tell the difference between which of these two purposes is being served is a feature of which some people are terribly proud. I'm not ready to agree with them, but I see what they're doing.
Looking at >>='s signature, how does \x -> same x type check with a -> m b?
It's actually very simple. Look at the type signatures:
same :: x -> x
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(>>= same) :: Monad m => m a -> (a -> m b) -> m b
                                |________|
                                     |
                                  x -> x
Therefore:
x := a
-- and
x := m b
-- and by transitivity
a := x := m b
-- or
a := m b
Hence:
(>>= same) :: Monad m => m (m b) -> m b
This is just the join function from the Control.Monad module, and for the list monad it is the same as the concat function. Thus:
[[1]] >>= \x -> same x
-- is the same as the following via eta reduction
[[1]] >>= same
-- is the same as
(>>= same) [[1]]
-- is the same as
join [[1]]
-- is the same as
concat [[1]]
-- evaluates to
[1]
I would've expected \x -> same x to have produced a [b] type, since the Monad m type here is [], as I understand.
Indeed, it does. The \x -> same x function which has the type x -> x is specialized to the type [b] -> [b] as I explained above. Hence, (>>= same) is of the type [[b]] -> [b] which is the same as the concat function. It flattens a list of lists.
The concat function is a specialization of the join function which flattens a nested monad.
It should be noted that a monad can be defined in terms of either >>= or fmap and join. To quote Wikipedia:
Although Haskell defines monads in terms of the return and >>= functions, it is also possible to define a monad in terms of return and two other operations, join and fmap. This formulation fits more closely with the original definition of monads in category theory. The fmap operation, with type Monad m => (a -> b) -> m a -> m b, takes a function between two types and produces a function that does the “same thing” to values in the monad. The join operation, with type Monad m => m (m a) -> m a, “flattens” two layers of monadic information into one.
The two formulations are related as follows:
fmap f m = m >>= (return . f)
join n = n >>= id
m >>= g ≡ join (fmap g m)
Here, m has the type Monad m => m a, n has the type Monad m => m (m a), f has the type a -> b, and g has the type Monad m => a -> m b, where a and b are underlying types.
The fmap function is defined for any functor in the category of types and functions, not just for monads. It is expected to satisfy the functor laws:
fmap id ≡ id
fmap (f . g) ≡ (fmap f) . (fmap g)
The return function characterizes pointed functors in the same category, by accounting for the ability to “lift” values into the functor. It should satisfy the following law:
return . f ≡ fmap f . return
In addition, the join function characterizes monads:
join . fmap join ≡ join . join
join . fmap return ≡ join . return ≡ id
join . fmap (fmap f) ≡ fmap f . join
Hope that helps.
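The three quoted relations can be spot-checked for the list monad like this (a sketch with my own sample values):
import Control.Monad (join)

relations :: Bool
relations =
     fmap (+ 1) [1, 2, 3 :: Int] == ([1, 2, 3] >>= (return . (+ 1)))
  && join [[1], [2, 3 :: Int]]   == ([[1], [2, 3]] >>= id)
  && ([1, 2 :: Int] >>= g)       == join (fmap g [1, 2])
  where
    g x = [x, x * 10]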
As a few people have commented, you've found a really cute property about monads here. For reference, let's look at the signature for bind:
:: Monad m => m a -> (a -> m b) -> m b
In your case, the type a === m b as you have a [[a]] or m (m a). So, if you rewrite the signature of the above bind operation, you get:
:: Monad m => m (m b) -> ((m b) -> m b) -> m b
I mentioned that this is cute, because by extension, this works for any nested monad. e.g.
:: [[b]] -> ([b] -> [b]) -> [b]
:: Maybe (Maybe b) -> (Maybe b -> Maybe b) -> Maybe b
:: Reader (Reader b) -> (Reader b -> Reader b) -> Reader b
If you look at the function that gets applied here, you'll see that it's the identity function (e.g. id, same, :: forall a. a -> a).
This is included in the standard libraries for Haskell, as join. You can look at the source here on hackage. You'll see it's implemented as bind id, or \mma -> mma >>= id, or (=<<) id
As you say m is []. Then a is [Integer] (ignoring the fact that numbers are polymorphic for simplicity's sake) and b is Integer. So a -> m b becomes [Integer] -> [Integer].
First: we should use the standard version of same, it is called id.
Now, let's rename some type variables
id :: (a'' ~ a) => a -> a''
What this means is: the signature of id is that of a function mapping between two types, with the extra constraint that both types be equal. That's all – we do not require any particular properties, like “being flat”.
Why the hell would I write it this way? Well, if we also rename some of the variables in the bind signature...
(>>=) :: (Monad m, a'~m a, a''~m b) => a' -> (a -> a'') -> a''
...then it is obvious how we can plug the id, as the type variables have already been named accordingly. The type-equality constraint a''~a from id is simply taken to the compound's signature, i.e.
(>>=id) :: (Monad m, a'~m a, a''~m b, a''~a) => a' -> a''
or, simplifying that,
(>>=id) :: (Monad m, a'~m a, m b~a) => a' -> m b
(>>=id) :: (Monad m, a'~m (m b)) => a' -> m b
(>>=id) :: (Monad m) => m (m b) -> m b
So what this does is, it flattens a nested monad to a single application of that same monad. Quite simple, and as a matter of fact this is one of the “more fundamental” operations: mathematicians don't define the bind operator; they instead define two morphisms η :: a -> m a (we know that, it's return) and μ :: m (m a) -> m a – yup, that's the one you've just discovered. In Haskell, it's called join.
The monad here is [a] and the example is pointlessly complicated. This’ll be clearer:
Prelude> [[1]] >>= id
[1]
just as
Prelude> [[1]] >>= const [2]
[2]
i.e. for lists, >>= is concatMap with its arguments flipped, and it amounts to concat when used with id.

Why use such a peculiar function type in monads?

New to Haskell, and am trying to figure out this Monad thing. The monadic bind operator -- >>= -- has a very peculiar type signature:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
To simplify, let's substitute Maybe for m:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
However, note that the definition could have been written in three different ways:
(>>=) :: Maybe a -> (Maybe a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> (      a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> (      a ->       b) -> Maybe b
Of the three, the one in the centre is the most asymmetric. However, I understand that the first one is kinda meaningless if we want to avoid what LYAH calls boilerplate code. Of the remaining two, I would prefer the last one. For Maybe, with (>>=) defined as
(>>=) :: Maybe a -> (a -> b) -> Maybe b
the instance would look like:
instance Monad Maybe where
  Nothing >>= f = Nothing
  (Just x) >>= f = return $ f x
Here, a -> b is an ordinary function. Also, I don't immediately see anything unsafe, because Nothing catches the exception before the function application, so the a -> b function will not be called unless a Just a is obtained.
So maybe there is something that isn't apparent to me which has caused the (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b definition to be preferred over the much simpler (>>=) :: Maybe a -> (a -> b) -> Maybe b definition? Is there some inherent problem associated with the (what I think is a) simpler definition?
It's much more symmetric if you think in terms the following derived function (from Control.Monad):
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g
The reason this function is significant is that it obeys three useful equations:
-- Associativity
(f >=> g) >=> h = f >=> (g >=> h)
-- Left identity
return >=> f = f
-- Right identity
f >=> return = f
These are category laws and if you translate them to use (>>=) instead of (>=>), you get the three monad laws:
(m >>= g) >>= h = m >>= \x -> (g x >>= h)
return x >>= f = f x
m >>= return = m
So it's really not (>>=) that is the elegant operator but rather (>=>) is the symmetric operator you are looking for. However, the reason we usually think in terms of (>>=) is because that is what do notation desugars to.
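To see the desugaring point concretely, here is a small sketch (the function names are my own); the do block and the explicit (>>=) chain are the same program:
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

withDo :: Double -> Maybe Double
withDo x = do
  y <- safeRecip x
  z <- safeRecip (y - 1)
  return (y + z)

withBind :: Double -> Maybe Double
withBind x = safeRecip x >>= \y -> safeRecip (y - 1) >>= \z -> return (y + z)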
Let us consider one of the common uses of the Maybe monad: handling errors. Say I wanted to divide two numbers safely. I could write this function:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)
Then with the standard Maybe monad, I could do something like this:
foo :: Int -> Int -> Maybe Int
foo a b = do
  c <- safeDiv 1000 b
  d <- safeDiv a c -- These last two lines could be combined.
  return d         -- I am not doing so for clarity.
Note that at each step, safeDiv can fail, but at both steps, safeDiv takes Ints, not Maybe Ints. If >>= had this signature:
(>>=) :: Maybe a -> (a -> b) -> Maybe b
You could compose functions together, then give it either a Nothing or a Just, and either it would unwrap the Just, go through the whole pipeline, and re-wrap it in Just, or it would just pass the Nothing through essentially untouched. That might be useful, but it's not a monad. For it to be of any use, we have to be able to fail in the middle, and that's what this signature gives us:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
By the way, something with the signature you devised does exist:
flip fmap :: Maybe a -> (a -> b) -> Maybe b
The more complicated function with a -> Maybe b is the more generic and more useful one and can be used to implement the simple one. That doesn't work the other way around.
You can build an a -> Maybe b function from a function f :: a -> b:
f' :: a -> Maybe b
f' x = Just (f x)
Or, in terms of return (which is Just for Maybe):
f' = return . f
The other way around is not necessarily possible. If you have a function g :: a -> Maybe b and want to use it with the "simple" bind, you would have to convert it into a function a -> b first. But this doesn't usually work, because g might return Nothing where the a -> b function needs to return a b value.
So generally the "simple" bind can be implemented in terms of the "complicated" one, but not the other way around. Additionally, the complicated bind is often useful and not having it would make many things impossible. So by using the more generic bind monads are applicable to more situations.
The problem with the alternative type signature for (>>=) is that it only accidentally works for the Maybe monad; if you try it out with another monad (e.g. the list monad) you'll see it breaks down at the type of b in the general case. The signature you provided doesn't describe a monadic bind, and the monad laws don't hold with that definition.
import Prelude hiding (Monad, return)

-- assume monad was defined like this
class Monad m where
  (>>=) :: m a -> (a -> b) -> m b
  return :: a -> m a

instance Monad Maybe where
  Nothing >>= f = Nothing
  (Just x) >>= f = return $ f x

instance Monad [] where
  m >>= f = concat (map f m)
  return x = [x]
Fails with the type error:
Couldn't match type `b' with `[b]'
`b' is a rigid type variable bound by
the type signature for >>= :: [a] -> (a -> b) -> [b]
at monadfail.hs:12:3
Expected type: a -> [b]
Actual type: a -> b
In the first argument of `map', namely `f'
In the first argument of `concat', namely `(map f m)'
In the expression: concat (map f m)
The thing that makes a monad a monad is how 'join' works. Recall that join has the type:
join :: m (m a) -> m a
What 'join' does is "interpret" a monad action that returns a monad action in terms of a monad action. So, you can think of it peeling away a layer of the monad (or better yet, pulling the stuff in the inner layer out into the outer layer). This means that the 'm''s form a "stack", in the sense of a "call stack". Each 'm' represents a context, and 'join' lets us join contexts together, in order.
So, what does this have to do with bind? Recall:
(>>=) :: m a -> (a -> m b) -> m b
And now consider that for f :: a -> m b, and ma :: m a:
fmap f ma :: m (m b)
That is, the result of applying f directly to the a in ma is an (m (m b)). We can apply join to this, to get an m b. In short,
ma >>= f = join (fmap f ma)

why can't a function take a monadic value and return another monadic value?

Let's say that we have two monadic functions:
f :: a -> m b
g :: b -> m c
h :: a -> m c
The bind function is defined as
(>>=) :: m a -> (a -> m b) -> m b
My question is why can not we do something like below. Declare a function which would take a monadic value and returns another monadic value?
f :: a -> m b
g :: m b -> m c
h :: a -> m c
The bind function is defined as
(>>=) :: m a -> (m a -> m b) -> m b
What is in the haskell that restricts a function from taking a monadic value as it's argument?
EDIT: I think I did not make my question clear. The point is, when you are composing functions using the bind operator, why is it that the second argument to bind is a function which takes a non-monadic value (b)? Why can't it take a monadic value (m b) and give back m c? Is it that, when you are dealing with monads, the functions you compose will always have the following types?
f :: a -> m b
g :: b -> m c
h :: a -> m c
and h = f 'compose' g
I am trying to learn monads and this is something I am not able to understand.
A key ability of Monad is to "look inside" the m a type and see an a; but a key restriction of Monad is that it must be possible for monads to be "inescapable," i.e., the Monad typeclass operations should not be sufficient to write a function of type Monad m => m a -> a. (>>=) :: Monad m => m a -> (a -> m b) -> m b gives you exactly this ability.
But there's more than one way to achieve that. The Monad class could be defined like this:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor m => Monad m where
  return :: a -> m a
  join :: m (m a) -> m a
You ask why could we not have a Monad m => m a -> (m a -> m b) -> m b function. Well, given f :: a -> b, fmap f :: m a -> m b is basically that. But fmap by itself doesn't give you the ability to "look inside" a Monad m => m a yet not be able to escape from it. However join and fmap together give you that ability. (>>=) can be written generically with fmap and join:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
ma >>= f = join (fmap f ma)
In fact this is a common trick for defining a Monad instance when you're having trouble coming up with a definition for (>>=)—write the join function for your would-be monad, then use the generic definition of (>>=).
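As a sketch of that trick (the names MonadJ, returnJ, joinJ and bindJ are made up here to avoid clashing with the Prelude):
class Functor m => MonadJ m where
  returnJ :: a -> m a
  joinJ   :: m (m a) -> m a

-- bind, derived once and for all from join and fmap
bindJ :: MonadJ m => m a -> (a -> m b) -> m b
bindJ ma f = joinJ (fmap f ma)

instance MonadJ Maybe where
  returnJ = Just
  joinJ (Just (Just a)) = Just a
  joinJ _               = Nothing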
Well, that answers the "does it have to be the way it is" part of the question with a "no." But, why is it the way it is?
I can't speak for the designers of Haskell, but I like to think of it this way: in Haskell monadic programming, the basic building blocks are actions like these:
getLine :: IO String
putStrLn :: String -> IO ()
More generally, these basic building blocks have types that look like Monad m => m a, Monad m => a -> m b, Monad m => a -> b -> m c, ..., Monad m => a -> b -> ... -> m z. People informally call these actions. Monad m => m a is a no-argument action, Monad m => a -> m b is a one-argument action, and so on.
Well, (>>=) :: Monad m => m a -> (a -> m b) -> m b is basically the simplest function that "connects" two actions. getLine >>= putStrLn is the action that first executes getLine, and then executes putStrLn passing it the result that was obtained from executing getLine. If you had fmap and join and not >>= you'd have to write this:
join (fmap putStrLn getLine)
Even more generally, (>>=) embodies a notion much like a "pipeline" of actions, and as such is the more useful operator for using monads as a kind of programming language.
Final thing: make sure you are aware of the Control.Monad module. While return and (>>=) are the basic functions for monads, there's endless other more high-level functions that you can define using those two, and that module gathers a few dozen of the more common ones. Your code should not be forced into a straitjacket by (>>=); it's a crucial building block that's useful both on its own and as a component for larger building blocks.
why can not we do something like below. Declare a function which would take a monadic value and returns another monadic value?
f :: a -> m b
g :: m b -> m c
h :: a -> m c
Am I to understand that you wish to write the following?
compose :: (a -> m b) -> (m b -> m c) -> (a -> m c)
compose f g = h where
  h = ???
It turns out that this is just regular function composition, but with the arguments in the opposite order
(.) :: (y -> z) -> (x -> y) -> (x -> z)
(g . f) = \x -> g (f x)
Let's choose to specialize (.) with the types x = a, y = m b, and z = m c
(.) :: (m b -> m c) -> (a -> m b) -> (a -> m c)
Now flip the order of the inputs, and you get the desired compose function
compose :: (a -> m b) -> (m b -> m c) -> (a -> m c)
compose = flip (.)
Notice that we haven't even mentioned monads anywhere here. This works perfectly well for any type constructor m, whether it is a monad or not.
Now let's consider your other question. Suppose we want to write the following:
composeM :: (a -> m b) -> (b -> m c) -> (a -> m c)
Stop. Hoogle time. Hoogling for that type signature, we find there is an exact match! It is >=> from Control.Monad, but notice that for this function, m must be a monad.
Now the question is why. What makes this composition different from the other one such that this one requires m to be a Monad, while the other does not? Well, the answer to that question lies at the heart of understanding what the Monad abstraction is all about, so I'll leave a more detailed answer to the various internet resources that speak about the subject. Suffice it to say that there is no way to write composeM without knowing something about m. Go ahead, try it. You just can't write it without some additional knowledge about what m is, and the additional knowledge necessary to write this function just happens to be that m has the structure of a Monad.
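For reference, here is one way composeM could be written once the Monad interface is available (a sketch; the library version is (>=>) from Control.Monad):
composeM :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
composeM f g = \a -> f a >>= g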
Let me paraphrase your question a little bit:
why don't we use functions of type g :: m a -> m b with Monads?
The answer is, we do already, with Functors. There's nothing especially "monadic" about fmap f :: Functor m => m a -> m b where f :: a -> b. Monads are Functors; we get such functions just by using good old fmap:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
If you have a function f :: m a -> m b and a monadic value x :: m a, you can simply apply f x. You don't need any special monadic operator for that, just function application. But a function such as f can never "see" a value of type a.
Monadic composition of functions is much stronger concept and functions of type a -> m b are the core of monadic computations. If you have a monadic value x :: m a, you cannot "get into it" to retrieve some value of type a. But, if you have a function f :: a -> m b that operates on values of type a, you can compose the value with the function using >>= to get x >>= f :: m b. The point is, f "sees" a value of type a and can work with it (but it cannot return it, it can only return another monadic value). This is the benefit of >>= and each monad is required to provide its proper implementation.
To compare the two concepts:
If you have g :: m a -> m b, you can compose it with return to get g . return :: a -> m b (and then work with >>=), but
not vice versa. In general there is no way of creating a function of type m a -> m b from a function of type a -> m b.
So composing functions of types like a -> m b is a strictly stronger concept than composing functions of types like m a -> m b.
For example: The list monad represents computations that can give a variable number of answers, including 0 answers (you can view it as non-deterministic computations). The key elements of computing within list monad are functions of type a -> [b]. They take some input and produce a variable number of answers. Composition of these functions takes the results from the first one, applies the second function to each of the results, and merges it into a single list of all possible answers.
Functions of type [a] -> [b] would be different: They'd represent computations that take multiple inputs and produce multiple answers. They can be combined too, but we get something less strong than the original concept.
Perhaps an even more distinctive example is the IO monad. If you called getChar :: IO Char and used only functions of type IO a -> IO b, you'd never be able to work with the character that was read. But >>= allows you to combine such a value with a function of type a -> IO b that can "see" the character and do something with it.
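A small sketch of that last point (the example is mine): the function given to (>>=) receives the Char itself and can decide which action to run next, which a pure function handed to fmap could not do.
import Data.Char (isUpper)

react :: IO ()
react = getChar >>= \c -> if isUpper c then putStrLn "Shouting!" else return ()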
As others have pointed out, there is nothing that restricts a function to take a monadic value as argument. The bind function itself takes one, but not the function that is given to bind.
I think you can make this understandable to yourself with the "Monad is a Container" metaphor. A good example for this is Maybe. While we know how to unwrap a value from the Maybe container, we do not know how to do it for every monad, and for some monads (like IO) it is entirely impossible.
The idea is now that the Monad does this behind the scenes in a way you don't have to know about. For example, you indeed need to work with a value that was returned in the IO monad, but you cannot unwrap it, hence the function that does this needs to be in the IO monad itself.
I like to think of a monad as a recipe for constructing a program with a specific context. The power that a monad provides is the ability to, at any stage within your constructed program, branch depending upon the previous value. The usual >>= function was chosen as being the most generally useful interface to this branching ability.
As an example, the Maybe monad provides a program that may fail at some stage (the context is the failure state). Consider this pseudo-Haskell example:
-- take a computation that produces an Int. If the current Int is even, add 1.
incrIfEven :: Monad m => m Int -> m Int
incrIfEven anInt =
  let ourInt = currentStateOf anInt
  in  if even ourInt then return (ourInt+1) else return ourInt
In order to branch based on the current result of a computation, we need to be able to access that current result. The above pseudo-code would work if we had access to currentStateOf :: m a -> a, but that isn't generally possible with monads. Instead we write our decision to branch as a function of type a -> m b. Since the a isn't in a monad in this function, we can treat it like a regular value, which is much easier to work with.
incrIfEvenReal :: Monad m => m Int -> m Int
incrIfEvenReal anInt = anInt >>= branch
  where branch ourInt = if even ourInt then return (ourInt+1) else return ourInt
So the type of >>= is really for ease of programming, but there are a few alternatives that are sometimes more useful. Notably the function Control.Monad.join, which when combined with fmap gives exactly the same power as >>= (either can be defined in terms of the other).
The reason (>>=)'s second argument does not take a monad as input is because there is no need to bind such a function at all. Just apply it:
m :: m a
f :: a -> m b
g :: m b -> m c
h :: c -> m b
(g (m >>= f)) >>= h
You don't need (>>=) for g at all.
The function can take a monadic value if it wants. But it is not forced to do so.
Consider the following contrived definitions, using the list monad and functions from Data.Char:
import Data.Char (chr, toLower, toUpper)

m :: [[Int]]
m = [[71,72,73], [107,106,105,104]]

f :: [Int] -> [Char]
f mx = do
  g <- [toUpper, id, toLower]
  x <- mx
  return (g $ chr x)
You can certainly run m >>= f; the result will have type [Char].
(It's important here that m :: [[Int]] and not m :: [Int]. >>= always "strips off" one monadic layer from its first argument. If you don't want that to happen, do f m instead of m >>= f.)
As others have mentioned, nothing restricts such functions from being written.
There is, in fact, a large family of functions of type :: m a -> (m a -> m b) -> m b:
import Control.Monad (replicateM_)

f :: Monad m => Int -> m a -> (m a -> m b) -> m b
f n m mf = replicateM_ n m >> mf m
-- which unfolds to:
-- f 0 m mf = mf m
-- f 1 m mf = m >> mf m
-- f 2 m mf = m >> m >> mf m
-- ... etc. ...
(Note the base case: when n is 0, it's simply normal functional application.)
But what does this function do? It performs a monadic action multiple times, finally throwing away all the results, and returning the application of mf to m.
Useful sometimes, but hardly generally useful, especially compared to >>=.
A quick Hoogle search doesn't turn up any results; perhaps a telling result.
