'join' for Applicatives? - haskell

I'm looking for the following function:
Applicative f => f (f a) -> f a
Hoogle shows me join:
>:t join
join :: Monad m => m (m a) -> m a
Is there a function that matches my desired signature?

To expand a bit on Carl's answer: if there were such a thing as join, but for applicatives:
class Applicative f => ApplicativeWithJoin f where
  join' :: f (f a) -> f a
then you would automatically have a monad:
instance ApplicativeWithJoin m => Monad m where
  return = pure
  x >>= f = join' (f <$> x)

There is no such function. join is explicitly what Applicative lacks and Monad has.

To expand on SingleNegationElimination's answer:
Applicative's <*> lets you combine effects and the values inside them, and <$> lets you manipulate a value inside, but you cannot make an effect depend on the value produced by a previous computation.
Monads, on the other hand, allow an effect to be determined by the result of the previous computation, as witnessed by >>=.
With any Applicative you can first use <$> to convert a value of type a inside f a into some f b, so you get f (f b). But without join, the inner f b is just another value; there is no way to combine it with the outer layer that is actually executed. Adding join makes that possible, giving you the full power of Monad.
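To make that concrete, here is a minimal runnable sketch with Maybe (the lookup functions are hypothetical):
import Control.Monad (join)

lookupUser :: Int -> Maybe String
lookupUser 1 = Just "alice"
lookupUser _ = Nothing

lookupEmail :: String -> Maybe String
lookupEmail "alice" = Just "alice@example.com"
lookupEmail _       = Nothing

-- With Functor alone we can only map, which nests the effect:
nested :: Maybe (Maybe String)
nested = lookupEmail <$> lookupUser 1   -- Just (Just "alice@example.com")

-- join collapses the nesting, making the second lookup depend on the first:
email :: Maybe String
email = join nested                     -- Just "alice@example.com"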

Related

What is the 'minimum' needed to make an Applicative a Monad?

The Monad typeclass can be defined in terms of return and (>>=). However, if we already have a Functor instance for some type constructor f, then this definition is sort of 'more than we need' in that (>>=) and return could be used to implement fmap so we're not making use of the Functor instance we assumed.
In contrast, defining return and join seems like a more 'minimal'/less redundant way to make f a Monad. This way, the Functor constraint is essential because fmap cannot be written in terms of these operations. (Note join is not necessarily the only minimal way to go from Functor to Monad: I think (>=>) works as well.)
Similarly, Applicative can be defined in terms of pure and (<*>), but this definition again does not take advantage of the Functor constraint since these operations are enough to define fmap.
However, Applicative f can also be defined using unit :: f () and (>*<) :: f a -> f b -> f (a, b). These operations are not enough to define fmap so I would say in some sense this is a more minimal way to go from Functor to Applicative.
Is there a characterization of Monad as fmap, unit, (>*<), and some other operator which is minimal in that none of these functions can be derived from the others?
(>>=) does not work, since it can implement a >*< b = a >>= (\ x -> b >>= \ y -> pure (x, y)) where pure x = fmap (const x) unit.
Nor does join since m >>= k = join (fmap k m) so (>*<) can be implemented as above.
I suspect (>=>) fails similarly.
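For reference, here is a sketch of the unit/(>*<) presentation of Applicative described in the question (the class name and the derived helpers are illustrative, not from base):
class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f a -> f b -> f (a, b)

-- pure and <*> are recoverable from fmap, unit and (>*<):
pureM :: Monoidal f => a -> f a
pureM x = fmap (const x) unit

apM :: Monoidal f => f (a -> b) -> f a -> f b
apM ff fa = fmap (\(f, a) -> f a) (ff >*< fa)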
I have something, I think. It's far from elegant, but maybe it's enough to get you unstuck, at least. I started with join :: m (m a) -> ??? and asked "what could it produce that would require (<*>) to get back to m a?", which I found to be a fruitful line of thought that probably has more spoils.
If you introduce a new type T which can only be constructed inside the monad:
t :: m T
Then you could define a join-like operation which requires such a T:
joinT :: m (m a) -> m (T -> a)
The only way we can produce the T we need to get to the sweet, sweet a inside is by using t, and then we have to combine that with the result of joinT somehow. There are two basic operations that can combine two ms into one: (<*>) and joinT -- fmap is no help. joinT is not going to work, because we'll just need yet another T to use its result, so (<*>) is the only option, meaning that (<*>) can't be defined in terms of joinT.
You could roll that all up into an existential, if you prefer.
joinT :: (forall t. m t -> (m (m a) -> m (t -> a)) -> r) -> r

What is the relationship between bind and join?

I got the impression that (>>=) (used by Haskell) and join (preferred by mathematicians) are "equal" since one can write one in terms of the other:
import Control.Monad (join)
join x = x >>= id
x >>= f = join (fmap f x)
Additionally every monad is a functor since bind can be used to replace fmap:
fmap f x = x >>= (return . f)
I have the following questions:
Is there a (non-recursive) definition of fmap in terms of join? (fmap f x = join $ fmap (return . f) x follows from the equations above but is recursive.)
Is "every monad is a functor" a conclusion when using bind (in the definition of a monad), but an assumption when using join?
Is bind more "powerful" than join? And what would "more powerful" mean?
A monad can be either defined in terms of:
return :: a -> m a
bind :: m a -> (a -> m b) -> m b
or alternatively in terms of:
return :: a -> m a
fmap :: (a -> b) -> m a -> m b
join :: m (m a) -> m a
To your questions:
No, we cannot define fmap in terms of join and return, since otherwise we could remove fmap from the second list above.
No, "every monad is a functor" is a statement about monads in general, regardless whether you define your specific monad in terms of bind or in terms of join and fmap. It is easier to understand the statement if you see the second definition, but that's it.
Yes, bind is more "powerful" than join. It is exactly as "powerful" as join and fmap combined, if by "powerful" you mean the capacity to define a monad (always in combination with return).
For an intuition, see e.g. this answer – bind allows you to combine or chain strategies/plans/computations (that are in a context) together. As an example, let's use the Maybe context (or Maybe monad):
λ: let plusOne x = Just (x + 1)
λ: Just 3 >>= plusOne
Just 4
fmap also lets you chain computations in a context together, but at the cost of increasing the nesting with every step.[1]
λ: fmap plusOne (Just 3)
Just (Just 4)
That's why you need join: to squash two levels of nesting into one. Remember:
join :: m (m a) -> m a
Having only the squashing step doesn't get you very far. You also need fmap to have a monad – and return, which is Just in the example above.
[1]: fmap and (>>=) don't take their two arguments in the same order, but don't let that confuse you.
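Putting the pieces together, a small runnable sketch of bind as join-after-fmap:
import Control.Monad (join)

plusOne :: Int -> Maybe Int
plusOne x = Just (x + 1)

-- fmap nests, join squashes; together they recover (>>=):
demo :: Maybe Int
demo = join (fmap plusOne (Just 3))   -- Just 4, same as Just 3 >>= plusOne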
Is there a [definition] of fmap in terms of join?
No, there isn't. That can be demonstrated by attempting to do it. Suppose we are given an arbitrary type constructor T, and functions:
returnT :: a -> T a
joinT :: T (T a) -> T a
From this data alone, we want to define:
fmapT :: (a -> b) -> T a -> T b
So let's sketch it:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = tb
  where
    tb = undefined -- tb :: T b
We need to get a value of type T b somehow. ta :: T a on its own won't do, so we need functions that produce T b values. The only two candidates are joinT and returnT. joinT doesn't help:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = joinT ttb
  where
    ttb = undefined -- ttb :: T (T b)
It just kicks the can down the road, as needing a T (T b) value under these circumstances is no improvement.
We might try returnT instead:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT b
  where
    b = undefined -- b :: b
Now we need a b value. The only thing that can give us one is f:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT (f a)
  where
    a = undefined -- a :: a
And now we are stuck: nothing can give us an a. We have exhausted all available possibilities, so fmapT cannot be defined in such terms.
A digression: it wouldn't suffice to cheat by using a function like this:
extractT :: T a -> a
With an extractT, we might try a = extractT ta, leading to:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT (f (extractT ta))
It is not enough, however, for fmapT to have the right type: it must also follow the functor laws. In particular, fmapT id = id should hold. With this definition, fmapT id is returnT . extractT, which, in general, is not id (most functors which are instances of both Monad and Comonad serve as examples).
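To see that law failure concretely, take lists, with head standing in for extractT and a singleton list for returnT (a sketch; head being partial is beside the point here):
fmapBad :: (a -> b) -> [a] -> [b]
fmapBad f xs = [f (head xs)]   -- returnT (f (extractT xs))

-- fmapBad id [1,2,3] == [1], but id [1,2,3] == [1,2,3],
-- so fmapBad id = id fails.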
Is "every monad is a functor" a conclusion when using bind (in the definition of a monad), but an assumption when using join?
"Every monad is a functor" is an assumption, or, more precisely, part of the definition of monad. To pick an arbitrary illustration, here is Emily Riehl, Category Theory In Context, p. 154:
Definition 5.1.1. A monad on a category C consists of
an endofunctor T : C → C,
a unit natural transformation η : 1_C ⇒ T, and
a multiplication natural transformation μ : T^2 ⇒ T,
so that the following diagrams commute in C^C: [diagrams of the monad laws]
A monad, therefore, involves an endofunctor by definition. For a Haskell type constructor T that instantiates Monad, the object mapping of that endofunctor is T itself, and the morphism mapping is its fmap. That T will be a Functor instance, and therefore will have an fmap, is, in contemporary Haskell, guaranteed by Applicative (and, by extension, Functor) being a superclass of Monad.
Is that the whole story, though? As far as Haskell is concerned, we know that liftM exists, and also that in a not-so-distant past Functor was not a superclass of Monad. Are those two facts mere Haskellisms? Not quite. In the classic paper Notions of computation and monads, Eugenio Moggi unearths the following definition (p. 3):
Definition 1.2 ([Man76]) A Kleisli triple over a category C is a triple (T, η, _*), where T : Obj(C) → Obj(C), η_A : A → T A for A ∈ Obj(C), f* : T A → T B for f : A → T B, and the following equations hold:
η_A* = id_{T A}
η_A ; f* = f   for   f : A → T B
f* ; g* = (f ; g*)*   for   f : A → T B   and   g : B → T C
The important detail here is that T is presented as merely an object mapping in the category C, and not as an endofunctor in C. Working in the Hask category, that amounts to taking a type constructor T without presupposing it is a Functor instance. In code, we might write that as:
class KleisliTriple t where
  return :: a -> t a
  (=<<) :: (a -> t b) -> t a -> t b

-- (return =<<) = id
-- (f =<<) . return = f
-- (g =<<) . (f =<<) = ((g =<<) . f =<<)
Flipped bind aside, that is the pre-AMP definition of Monad in Haskell. Unsurprisingly, Moggi's paper doesn't take long to show that "there is a one-to-one correspondence between Kleisli triples and monads" (p. 5), establishing along the way that T can be extended to an endofunctor (in Haskell, that step amounts to defining the morphism mapping liftM f m = return . f =<< m, and then showing it follows the functor laws).
All in all, if you write lawful definitions of return and (>>=) without presupposing fmap, you indeed get a lawful implementation of Functor as a consequence. "There is a one-to-one correspondence between Kleisli triples and monads" is a consequence of the definition of Kleisli triple, while "a monad involves an endofunctor" is part of the definition of monad. It is tempting to consider whether it would be more accurate to describe what Haskellers did when writing Monad instances as "setting up a Kleisli triple" rather than "setting up a monad", but I will refrain out of fear of getting mired in terminological pedantry -- and in any case, now that Functor is a superclass of Monad there is no practical reason to worry about that.
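As a sketch, the endofunctor extension mentioned above can be written directly against the KleisliTriple class from the earlier snippet (assuming the Prelude's return and (=<<) are hidden so the class's own methods are in scope):
liftM' :: KleisliTriple t => (a -> b) -> t a -> t b
liftM' f m = (return . f) =<< m

-- The functor laws then follow from the Kleisli triple laws, e.g.
-- liftM' id m = (return . id) =<< m = (return =<<) m = m.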
Is bind more "powerful" than join? And what would "more powerful" mean?
Trick question!
Taken at face value, the answer would be yes, to the extent that, together with return, (>>=) makes it possible to implement fmap (via liftM, as noted above), while join doesn't. However, I don't feel it is worthwhile to insist on this distinction. Why so? Because of the monad laws. Just like it doesn't make sense to talk about a lawful (>>=) without presupposing return, it doesn't make sense to talk about a lawful join without presupposing return and fmap.
One might get the impression that I am giving too much weight to the laws by using them to tie Monad and Functor in this way. It is true that there are cases of laws that involve two classes, and that only apply to types which instantiate them both. Foldable provides a good example of that: we can find the following law in the Traversable documentation:
The superclass instances should satisfy the following: [...]
In the Foldable instance, foldMap should be equivalent to traversal with a constant applicative functor (foldMapDefault).
That this specific law doesn't always apply is not a problem, because we don't need it to characterise what Foldable is (alternatives include "a Foldable is a container from which we can extract some sequence of elements", and "a Foldable is a container that can be converted to the free monoid on its element type"). With the monad laws, though, it isn't like that: the meaning of the class is inextricably bound to all three of the monad laws.

Is there any intuition to understand join two functions in Monad?

join is defined along with bind to flatten the combined data structure into a single structure.
From the type system's point of view, (+) 7 :: Num a => a -> a can be considered a Functor, and (+) :: Num a => a -> a -> a a Functor of a Functor. How can I get some intuition about this instead of just relying on the type system? Why does join (+) 7 === 14?
Even though it is possible to get the final result by manually stepping through the function-binding process, it would be great if some intuition were given.
This is from the NICTA exercises.
-- | Binds a function on the reader ((->) t).
--
-- >>> ((*) =<< (+10)) 7
-- 119
instance Bind ((->) t) where
  (=<<) ::
    (a -> ((->) t b))
    -> ((->) t a)
    -> ((->) t b)
  (f =<< a) t =
    f (a t) t

-- | Flattens a combined structure to a single structure.
--
-- >>> join (+) 7
-- 14
join ::
  Bind f =>
  f (f a)
  -> f a
join f =
  id =<< f
*Course.State> :t join (+)
join (+) :: Num a => a -> a
*Course.State> :t join
join :: Bind f => f (f a) -> f a
*Course.State> :t (+)
(+) :: Num a => a -> a -> a
how to get some intuition about it instead of just relying on type system?
I'd rather say that relying on the type system is a great way to build a specific sort of intuition. The type of join is:
join :: Monad m => m (m a) -> m a
Specialised to (->) r, it becomes:
(r -> (r -> a)) -> (r -> a)
Now let's try to define join for functions:
-- join :: (r -> (r -> a)) -> (r -> a)
join f = -- etc.
We know the result must be a function of type r -> a:
join f = \x -> -- etc.
However, we do not know anything at all about what the r and a types are, and therefore we know nothing in particular about f :: r -> (r -> a) and x :: r. Our ignorance means there is literally just one thing we can do with them: pass x as an argument, both to f and to f x:
join f = \x -> f x x
Therefore, join for functions passes the same argument twice because that is the only possible implementation. Of course, that implementation is only a proper monadic join because it follows the monad laws:
join . fmap join = join . join
join . fmap return = id
join . return = id
Verifying that might be another nice exercise.
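For instance, here is one of the three laws checked equationally for functions, where return is const and join f = \x -> f x x (a sketch):
-- join . return = id, for the function ("reader") monad:
--
--   (join . return) g
--     = join (const g)        -- return for functions is const
--     = \x -> const g x x     -- join f = \x -> f x x
--     = \x -> g x
--     = g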
Going along with the traditional analogy of a monad as a context for computation, join is a method of combining contexts. Let's start with your example, join (+) 7. Using a function as a monad implies the reader monad. (+ 1) is a reader monad which takes the environment and adds one to it. Thus, (+) would be a reader monad within a reader monad. The outer reader monad takes the environment n and returns a reader of the form (n +), which then expects an environment of its own. join simply merges the two environments, so that you provide the environment once and it is used as the argument twice: join (+) === \x -> (+) x x.
Now, more in general, let's look at some other examples. The Maybe monad represents potential failure. A value of Nothing is a failed computation, whereas a Just x is a success. A Maybe within a Maybe is a computation that could fail twice. A value of Just (Just x) is obviously a success, so joining that produces Just x. A Nothing or a Just Nothing indicates failure at some point, so joining the possible failure should indicate that the computation failed, i.e. Nothing.
A similar analogy can be made for the list monad, for which join is merely concat, the writer monad, which uses the monoidal operator <> to combine the output values in question, or any other monad.
join is a fundamental property of monads and is the operation that makes them significantly stronger than functors or applicative functors. Functors can be mapped over, applicatives can be sequenced, monads can be combined. Categorically, a monad is often defined in terms of join and return. It just so happens that in Haskell we find it more convenient to define it in terms of return, (>>=), and fmap, but the two definitions have been proven equivalent.
An intuition about join is that it squashes two containers into one, e.g.
join [[1]] => [1]
join (Just (Just 1)) => Just 1
join (a Christmas tree decorated with small Christmas trees) => a Christmas tree
etc.
Now, how can you join functions? In fact, functions can be seen as containers.
Look at a hash table, for example. You give a key and you get a value (or not): it's a function key -> value (or, if you prefer, key -> Maybe value).
So how would you join two nested hash maps?
Let's say I have (in Python style) h = {"a": {"a": 1, "b": 2}, "b": {"a": 10, "b": 20}}. How can I join it, or, if you prefer, flatten it?
Given "a", which value should I get? h["a"] gives me {"a": 1, "b": 2}. The only thing I can do with it is to look up "a" again in this new value, which gives me 1.
Therefore join h equals {"a": 1, "b": 20}.
It's the same for a function.
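Here is the same idea as a Haskell sketch, with the nested "hash" represented directly as a function (the keys and the default value are made up for illustration):
h :: String -> (String -> Int)
h "a" = \k -> if k == "a" then 1  else 2
h "b" = \k -> if k == "a" then 10 else 20
h _   = const 0   -- default, so the function is total

-- join for functions uses the same key at both levels:
joined :: String -> Int
joined k = h k k   -- joined "a" == 1, joined "b" == 20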

Difference between Monad and Applicative in Haskell

I just read the following from typeclassopedia about the difference between Monad and Applicative. I can understand that there is no join in Applicative. But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action. So, if I put a value into Maybe, which makes a monad, what is the result of this "computation"?
Let’s look more closely at the type of (>>=). The basic intuition is
that it combines two computations into one larger computation. The
first argument, m a, is the first computation. However, it would be
boring if the second argument were just an m b; then there would be no
way for the computations to interact with one another (actually, this
is exactly the situation with Applicative). So, the second argument to
(>>=) has type a -> m b: a function of this type, given a result of
the first computation, can produce a second computation to be run.
... Intuitively, it is this ability to use the output from previous
computations to decide what computations to run next that makes Monad
more powerful than Applicative. The structure of an Applicative
computation is fixed, whereas the structure of a Monad computation can
change based on intermediate results.
Is there a concrete example illustrating "ability to use the output from previous computations to decide what computations to run next", which Applicative does not have?
My favorite example is the "purely applicative Either". We'll start by analyzing the base Monad instance for Either
instance Monad (Either e) where
  return = Right
  Left e  >>= _ = Left e
  Right a >>= f = f a
This instance embeds a very natural short-circuiting notion: we proceed from left to right and once a single computation "fails" into the Left then all the rest do as well. There's also the natural Applicative instance that any Monad has
instance Applicative (Either e) where
  pure = return
  (<*>) = ap
where ap is nothing more than left-to-right sequencing before a return:
ap :: Monad m => m (a -> b) -> m a -> m b
ap mf ma = do
  f <- mf
  a <- ma
  return (f a)
Now the trouble with this Either instance comes to light when you'd like to collect error messages which occur anywhere in a computation and somehow produce a summary of errors. This flies in the face of short-circuiting. It also flies in the face of the type of (>>=)
(>>=) :: m a -> (a -> m b) -> m b
If we think of m a as "the past" and m b as "the future" then (>>=) produces the future from the past so long as it can run the "stepper" (a -> m b). This "stepper" demands that the value of a really exists in the future... and this is impossible for Either. Therefore (>>=) demands short-circuiting.
So instead we'll implement an Applicative instance which cannot have a corresponding Monad.
instance Monoid e => Applicative (Either e) where
  pure = Right
Now the implementation of (<*>) is the special part worth considering carefully. It performs some amount of "short-circuiting" in its first 3 cases, but does something interesting in the fourth.
  Right f <*> Right a = Right (f a)     -- neutral
  Left e  <*> Right _ = Left e          -- short-circuit
  Right _ <*> Left e  = Left e          -- short-circuit
  Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
Notice again that if we think of the left argument as "the past" and the right argument as "the future" then (<*>) is special compared to (>>=) as it's allowed to "open up" the future and the past in parallel instead of necessarily needing results from "the past" in order to compute "the future".
This means, directly, that we can use our purely Applicative Either to collect errors, ignoring Rights if any Lefts exist in the chain
> Right (+1) <*> Left [1] <*> Left [2]
Left [1,2]
So let's flip this intuition on its head. What can we not do with a purely applicative Either? Well, since its operation depends upon examining the future prior to running the past, we must be able to determine the structure of the future without depending upon values in the past. In other words, we cannot write
ifA :: Applicative f => f Bool -> f a -> f a -> f a
which satisfies the following equations
ifA (pure True) t e == t
ifA (pure False) t e == e
while we can write ifM
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mbool th el = do
  bool <- mbool
  if bool then th else el
such that
ifM (return True) t e == t
ifM (return False) t e == e
This impossibility arises because ifA embodies exactly the idea of the result computation depending upon the values embedded in the argument computations.
Just 1 describes a "computation", whose "result" is 1. Nothing describes a computation which produces no results.
The difference between a Monad and an Applicative is that in the Monad there's a choice. The key distinction of Monads is the ability to choose between different paths in computation (not just break out early). Depending on a value produced by a previous step in computation, the rest of computation structure can change.
Here's what this means. In the monadic chain
return 42 >>= (\x ->
  if x == 1
    then return (x+1)
    else return (x-1) >>= (\y ->
      return (1/y)))
the if chooses what computation to construct.
In case of Applicative, in
pure (1/) <*> ( pure (+(-1)) <*> pure 1 )
all the functions work "inside" computations; there's no chance to break up the chain. Each function just transforms the value it's fed. The "shape" of the computation structure is entirely "on the outside" from the functions' point of view.
A function could return a special value to indicate failure, but it can't cause the next steps in the computation to be skipped. They will all have to process the special value in a special way too. The shape of the computation cannot be changed according to a received value.
With monads, the functions themselves construct computations to their choosing.
Here is my take on J. Abrahamson's example as to why ifA cannot use the value inside e.g. (pure True). In essence, it still boils down to the absence of Monad's join function from Applicative, which unifies the two different perspectives given in the typeclassopedia to explain the difference between Monad and Applicative.
So, using J. Abrahamson's example of the purely applicative Either:
instance Monoid e => Applicative (Either e) where
  pure = Right
  Right f <*> Right a = Right (f a)     -- neutral
  Left e  <*> Right _ = Left e          -- short-circuit
  Right _ <*> Left e  = Left e          -- short-circuit
  Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
(which has similar short-circuiting effect to the Either Monad), and the ifA function
ifA :: Applicative f => f Bool -> f a -> f a -> f a
What if we try to achieve the mentioned equations:
ifA (pure True) t e == t
ifA (pure False) t e == e
?
Well, as already pointed out, ultimately the content of (pure True) cannot be used by a later computation. But technically speaking, this isn't quite right. We can use the content of (pure True), since an Applicative is also a Functor with fmap. We can do:
ifA' b t e = fmap (\x -> if x then t else e) b
The problem is with the return type of ifA', which is f (f a). In Applicative, there is no way of collapsing two nested Applicatives into one. But this collapsing is precisely what join in Monad performs. So,
ifA = join . ifA'
will satisfy the equations for ifA, if we can implement join appropriately. What Applicative is missing here is exactly the join function. In other words, we can somehow use the result of a previous computation in Applicative, but doing so augments the type of the return value to a nested applicative value, which we have no means to bring back to a single-level applicative value. This is a serious problem because, e.g., we cannot compose functions using Applicatives appropriately. Using join fixes the issue, but the very introduction of join promotes the Applicative to a Monad.
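Here is a runnable sketch of that construction, with Maybe standing in as the Monad:
import Control.Monad (join)

ifA' :: Functor f => f Bool -> f a -> f a -> f (f a)
ifA' b t e = fmap (\x -> if x then t else e) b

-- Collapsing the nesting needs join, i.e. a Monad:
ifA :: Monad m => m Bool -> m a -> m a -> m a
ifA b t e = join (ifA' b t e)

-- ifA (Just True)  (Just 1) (Just 2) == Just 1
-- ifA (Just False) (Just 1) (Just 2) == Just 2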
The key of the difference can be observed in the type of ap vs type of =<<.
ap    :: m (a -> b) -> (m a -> m b)
(=<<) :: (a -> m b) -> (m a -> m b)
In both cases there is m a, but only in the second case can m a decide whether the function (a -> m b) gets applied. In its turn, the function (a -> m b) can "decide" whether the function bound next gets applied, by producing an m b that does not "contain" a b (like [], Nothing or Left).
In Applicative there is no way for functions "inside" m (a -> b) to make such "decisions" - they always produce a value of type b.
f 1 = Nothing -- here f "decides" to produce Nothing
f x = Just x
Just 1 >>= f >>= g -- g doesn't get applied, because f decided so.
In Applicative this is not possible, so I can't show an example. The closest is:
f 1 = 0 -- here f "decides" to produce 0, but it cannot stop g
f x = x
g <$> (f <$> Just 1) -- g still gets applied, to the 0 that f
                     -- produced; the result is Just (g 0)
But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action.
Well, that vagueness is somewhat deliberate, because what "the result" is of a monadic computation is something that depends on each type. The best answer is a bit tautological: the "result" (or results, since there can be more than one) is whatever value(s) the instance's implementation of (>>=) :: Monad m => m a -> (a -> m b) -> m b invokes the function argument with.
So, if I put a value into Maybe, which makes a monad, what is the result of this "computation"?
The Maybe monad looks like this:
instance Monad Maybe where
  return = Just
  Nothing >>= _ = Nothing
  Just a  >>= k = k a
The only thing in here that qualifies as a "result" is the a in the second equation for >>=, because it's the only thing that ever gets "fed" to the second argument of >>=.
Other answers have gone into depth about the ifA vs. ifM difference, so I thought I'd highlight another significant difference: applicatives compose, monads don't. With Monads, if you want to make a Monad that combines the effects of two existing ones, you have to rewrite one of them as a monad transformer. In contrast, if you have two Applicatives you can easily make a more complex one out of them, as shown below. (Code is copypasted from transformers.)
-- | The composition of two functors.
newtype Compose f g a = Compose { getCompose :: f (g a) }

-- | The composition of two functors is also a functor.
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (fmap (fmap f) x)

-- | The composition of two applicatives is also an applicative.
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure x = Compose (pure (pure x))
  Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)

-- | The product of two functors.
data Product f g a = Pair (f a) (g a)

-- | The product of two functors is also a functor.
instance (Functor f, Functor g) => Functor (Product f g) where
  fmap f (Pair x y) = Pair (fmap f x) (fmap f y)

-- | The product of two applicatives is also an applicative.
instance (Applicative f, Applicative g) => Applicative (Product f g) where
  pure x = Pair (pure x) (pure x)
  Pair f g <*> Pair x y = Pair (f <*> x) (g <*> y)
-- | The sum of a functor @f@ with the 'Identity' functor.
data Lift f a = Pure a | Other (f a)

-- | The sum of two functors is always a functor.
instance (Functor f) => Functor (Lift f) where
  fmap f (Pure x)  = Pure (f x)
  fmap f (Other y) = Other (fmap f y)

-- | The sum of any applicative with 'Identity' is also an applicative.
instance (Applicative f) => Applicative (Lift f) where
  pure = Pure
  Pure f  <*> Pure x  = Pure (f x)
  Pure f  <*> Other y = Other (f <$> y)
  Other f <*> Pure x  = Other (($ x) <$> f)
  Other f <*> Other y = Other (f <*> y)
Now, if we add in the Constant functor/applicative:
newtype Constant a b = Constant { getConstant :: a }

instance Functor (Constant a) where
  fmap f (Constant x) = Constant x

instance (Monoid a) => Applicative (Constant a) where
  pure _ = Constant mempty
  Constant x <*> Constant y = Constant (x `mappend` y)
...we can assemble the "applicative Either" from the other responses out of Lift and Constant:
type Error e a = Lift (Constant e) a
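As a quick usage sketch of composability (a hypothetical GHCi session, using the Compose defined above to combine the list and Maybe applicatives with no extra code):
> getCompose (Compose [Just (+1), Nothing] <*> Compose [Just 10])
[Just 11,Nothing]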
As Will Ness explains in his answer, the key difference is that with Monads there's a choice between different execution paths at every step. Let's make this potential choice syntactically visible by implementing a function that sequences an action four times, first for an applicative f and then for a monad m:
import Control.Applicative ((<**>))
import Data.Functor ((<&>))

seq4A :: Applicative f => f a -> f [a]
seq4A f =
  f <**> (
    f <**> (
      f <**> (
        f <&> (\a1 a2 a3 a4 ->
          [a1, a2, a3, a4]))))

seq4M :: Monad m => m a -> m [a]
seq4M m =
  m >>= (\a1 ->
    m >>= (\a2 ->
      m >>= (\a3 ->
        m >>= (\a4 ->
          return [a1, a2, a3, a4]))))
The seq4M function has the values resulting from the monadic action available at every step and could thus make a choice at every step. On the other hand the seq4A function only has the values available at the very end.
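For instance, with the list monad an intermediate result can change the rest of the computation, something seq4A has no way to express (a sketch; the early-exit rule is made up):
seq4MStop :: [Int] -> [[Int]]
seq4MStop m =
  m >>= (\a1 ->
    if a1 == 0
      then return []                        -- a choice based on a1's value
      else m >>= (\a2 -> return [a1, a2]))

-- seq4MStop [0,1] == [[],[1,0],[1,1]]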
I would like to share my view on this "iffy miffy" thing. As I understand it, everything inside the context gets applied, so for example:
iffy :: Applicative f => f Bool -> f a -> f a -> f a
iffy fb ft fe = cond <$> fb <*> ft <*> fe
  where cond b t e = if b then t else e
case 1 >> iffy (Just True) (Just "True") Nothing ->> Nothing
oops, it should be Just "True" ... but
case 2 >> iffy (Just False) (Just "True") (Just "False") ->> Just "False"
(the "good" choice is made inside the context)
I explained this to myself this way: just before the end of the computation in case 1 we get something like this in the "chain":
Just (cond True "True") <*> something [something being "accidentally" Nothing]
which, by the definition of Applicative, is evaluated as:
fmap (cond True "True") something
which, when "something" is Nothing, becomes Nothing, because fmap over Nothing gives Nothing. And it is not possible to define a Functor with "fmap f Nothing = something", end of story.

Monads - Definition, Laws and Example [duplicate]

Possible Duplicate:
What is a monad?
I am learning to program in the functional language Haskell, and I came across Monads when studying parsers. I had never heard of them before, so I did some extra studying to find out what they are.
Everywhere I look in order to learn this topic just confuses me. I can't really find a simple definition of what a Monad is and how to use them. "A monad is a way to structure computations in terms of values and sequences of computations using those values" - eh???
Can someone please provide a simple definition of what a Monad is in Haskell, the laws associated with them and give an example?
Note: I know how to use the do syntax as I have had a look at I/O actions and functions with side-effects.
Intuition
A rough intuition would be that a Monad is a particular kind of container (Functor) for which you have two operations available: a wrapping operation return that puts a single element into a container, and an operation join that merges a container of containers into a single container.
return :: Monad m => a -> m a
join :: Monad m => m (m a) -> m a
So for the Monad Maybe you have:
return :: a -> Maybe a
return x = Just x
join :: Maybe (Maybe a) -> Maybe a
join (Just (Just x)) = Just x
join (Just Nothing)  = Nothing
join Nothing         = Nothing
Likewise for the Monad [ ] these operations are defined to be:
return :: a -> [a]
return x = [x]
join :: [[a]] -> [a]
join xs = concat xs
The standard mathematical definition of Monad is based on these return and join operators. However in Haskell the definition of the class Monad substitutes a bind operator for join.
Monads in Haskell
In functional programming languages these special containers are typically used to denote effectful computations. The type Maybe a would represent a computation that may or may not succeed, and the type [a] a computation that is non-deterministic. In particular we're interested in functions with effects, i.e. those with types a -> m b for some Monad m, and we need to be able to compose them. This can be done using either the monadic composition operator or the bind operator.
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
(>>=) :: Monad m => m a -> (a -> m b) -> m b
In Haskell the latter is the standard one. Note that its type is very similar to the type of the application operator (but with flipped arguments):
(>>=) :: Monad m => m a -> (a -> m b) -> m b
flip ($) :: a -> (a -> b) -> b
It takes an effectful function f :: a -> m b and a computation mx :: m a returning values of type a, and performs the application mx >>= f. So how do we do this with Monads? Containers (Functors) can be mapped, and in this case the result is a computation within a computation which can then be flattened:
fmap f mx :: m (m b)
join (fmap f mx) :: m b
So we have:
(mx >>= f) = join (fmap f mx) :: m b
To see this working in practice, consider a simple example with lists (non-deterministic functions). Suppose you have a list of possible results mx = [1,2,3] and a non-deterministic function f x = [x-1, x*2]. To calculate mx >>= f you begin by mapping f over mx and then you merge the results:
fmap f mx = [[0,2],[1,4],[2,6]]
join [[0,2],[1,4],[2,6]] = [0,2,1,4,2,6]
Since in Haskell the bind operator (>>=) is more important than join, for efficiency reasons the latter is defined in terms of the former and not the other way around.
join mx = mx >>= id
The bind operator can in turn be used to define a mapping operation; for this reason Monads were historically not required to be instances of the class Functor (since GHC 7.10, Functor is a superclass of Monad). The equivalent operation to fmap is called liftM in the Monad library.
liftM f mx = mx >>= \x-> return (f x)
So the actual definition for the Maybe Monad becomes:
return :: a -> Maybe a
return x = Just x
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= f = Nothing
Just x >>= f = f x
And for the Monad [ ]:
return :: a -> [a]
return x = [x]
(>>=) :: [a] -> (a -> [b]) -> [b]
xs >>= f = concat (map f xs)
         = concatMap f xs -- same as above, but more efficient
When designing your own Monads you may find it easier, instead of trying to define (>>=) directly, to split the problem into parts and figure out how to map and join your structures. Having map and join can also be useful for verifying that your Monad is well defined, in the sense that it satisfies the required laws.
Monad Laws
Your Monad should be a Functor, so the mapping operation should satisfy:
fmap id = id
fmap g . fmap f = fmap (g . f)
The laws for return and join are:
join . return = id
join . fmap return = id
join . join = join . fmap join
The first two laws specify that merging undoes wrapping. If you wrap a container in another one, join gives you back the original. If you map the contents of a container with a wrapping operation, join again gives you back what you initially had. The last law is the associativity of join. If you have three layers of containers you get the same result by merging from the inside or the outside.
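For instance, a quick sanity check of these laws for Maybe (a sketch, using join from Control.Monad):
-- join . return = id:
--   join (return (Just 1)) = join (Just (Just 1)) = Just 1
-- join . fmap return = id:
--   join (fmap return (Just 1)) = join (Just (Just 1)) = Just 1
-- join . join = join . fmap join, on Just (Just (Just 1)):
--   both sides reduce to join (Just (Just 1)) = Just 1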
Again you can work with bind instead of join and fmap. You get fewer but (arguably) more complicated laws:
return a >>= f = f a
m >>= return = m
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
A monad in Haskell is something that has two operations defined:
(>>=) :: Monad m => m a -> (a -> m b) -> m b -- also called bind
return :: Monad m => a -> m a
These two operations need to satisfy certain laws that really might just confuse you at this point, if you don't have a knack for mathy ideas. Conceptually, you use bind to operate on values on a monadic level and return to create monadic values from "trivial" ones. For instance,
getLine :: IO String,
so you cannot modify this String or putStrLn it directly -- because it's not a String, but an IO String!
Well, we have an IO Monad handy, so not to worry. All we have to do is use bind to do what we want. Let's see what bind looks like in the IO Monad:
(>>=) :: IO a -> (a -> IO b) -> IO b
And if we place getLine at the left hand side of bind, we can make it more specific yet.
(>>=) :: IO String -> (String -> IO b) -> IO b
Okay, so getLine >>= putStrLn . (++ ". No problem after all!") would print the entered line with the extra content added. The right hand side is a function that takes a String and produces an IO () - that wasn't hard at all! We just go by the types.
There are Monads defined for a lot of different types, for instance Maybe and [a], and they behave conceptually in the same way.
Just 2 >>= return . (+2) would yield Just 4, as you might expect. Note that we had to use return here, because otherwise the function on the right hand side would not match the return type m b, but just b, which would be a type error. It worked in the case of putStrLn because it already produces an IO something, which was exactly what our type needed to match. (Spoiler: Expressions of shape foo >>= return . bar are silly, because every Monad is a Functor. Can you figure out what that means?)
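For the curious, a sketch of where that spoiler leads:
-- foo >>= return . bar == fmap bar foo, for any Monad. For example:
-- Just 2 >>= return . (+2) == fmap (+2) (Just 2) == Just 4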
I personally think that this is as far as intuition will get you on the topic of monads, and if you want to dive deeper, you really do need to dive into the theory. I liked getting the hang of just using them first. You can look up the source for various Monad instances, for instance the list ([]) Monad or the Maybe Monad on Hoogle, and get a bit smarter about the exact implementations. Once you feel comfortable with that, have a go at the actual monad laws and try to gain a more theoretical understanding of them!
The Typeclassopedia has a section about Monad (but do read the preceding sections about Functor and Applicative first).

Resources