Unlike a Functor, a Monad can change shape? - haskell

I've always enjoyed the following intuitive explanation of a monad's power relative to a functor: a monad can change shape; a functor cannot.
For example: length $ fmap f [1,2,3] always equals 3.
With a monad, however, length $ [1,2,3] >>= g will often not equal 3. For example, if g is defined as:
g :: (Eq a, Num a) => a -> [a]
g x = if x==2 then [] else [x]
then [1,2,3] >>= g is equal to [1,3].
The thing that troubles me slightly, is the type signature of g. It seems impossible to define a function which changes the shape of the input, with a generic monadic type such as:
h :: (Monad m, Num a) => a -> m a
The MonadPlus or MonadZero type classes have relevant zero elements, to use instead of [], but now we have something more than a monad.
Am I correct? If so, is there a way to express this subtlety to a newcomer to Haskell? I'd like to make my beloved "monads can change shape" phrase just a touch more honest, if need be.

I've always enjoyed the following intuitive explanation of a monad's power relative to a functor: a monad can change shape; a functor cannot.
You're missing a bit of subtlety here, by the way. For the sake of terminology, I'll divide a Functor in the Haskell sense into three parts: The parametric component determined by the type parameter and operated on by fmap, the unchanging parts such as the tuple constructor in State, and the "shape" as anything else, such as choices between constructors (e.g., Nothing vs. Just) or parts involving other type parameters (e.g., the environment in Reader).
A Functor alone is limited to mapping functions over the parametric portion, of course.
A Monad can create new "shapes" based on the values of the parametric portion, which allows much more than just changing shapes: duplicating every element in a list or dropping the first five elements would change the shape without looking at the values, but filtering a list requires inspecting the elements.
This is essentially how Applicative fits between them--it allows you to combine the shapes and parametric values of two Functors independently, without letting the latter influence the former.
Am I correct? If so, is there a way to express this subtlety to a newcomer to Haskell? I'd like to make my beloved "monads can change shape" phrase just a touch more honest, if need be.
Perhaps the subtlety you're looking for here is that you're not really "changing" anything. Nothing in a Monad lets you explicitly mess with the shape. What it lets you do is create new shapes based on each parametric value, and have those new shapes recombined into a new composite shape.
Thus, you'll always be limited by the available ways to create shapes. With a completely generic Monad all you have is return, which by definition creates whatever shape is necessary such that (>>= return) is the identity function. The definition of a Monad tells you what you can do, given certain kinds of functions; it doesn't provide those functions for you.
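To make that concrete, a combinator built from the class methods alone can't do anything to the shape; a minimal sketch:
-- With only the Monad interface in scope, a combinator like this cannot
-- change shape: by the right-identity law it is just the original value.
boring :: Monad m => m a -> m a
boring m = m >>= return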

Monad's operations can "change the shape" of values to the extent that the >>= function replaces leaf nodes in the "tree" that is the original value with a new substructure derived from the node's value (for a suitably general notion of "tree" - in the list case, the "tree" is associative).
In your list example what is happening is that each number (leaf) is being replaced by the new list that results when g is applied to that number. The overall structure of the original list still can be seen if you know what you're looking for; the results of g are still there in order, they've just been smashed together so you can't tell where one ends and the next begins unless you already know.
A more enlightening point of view may be to consider fmap and join instead of >>=. Together with return, either way gives an equivalent definition of a monad. In the fmap/join view, though, what is happening here is more clear. Continuing with your list example, first g is fmapped over the list yielding [[1],[],[3]]. Then that list is joined, which for list is just concat.
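For concreteness, here is that decomposition spelled out on the example from the question (a sketch; join is exported by Control.Monad):
import Control.Monad (join)

g :: (Eq a, Num a) => a -> [a]
g x = if x == 2 then [] else [x]

-- fmap g [1,2,3]        == [[1],[],[3]]
-- join (fmap g [1,2,3]) == concat [[1],[],[3]] == [1,3]
-- which is exactly [1,2,3] >>= g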

Just because the monad pattern includes some particular instances that allow shape changes doesn't mean every instance can have shape changes. For example, there is only one "shape" available in the Identity monad:
newtype Identity a = Identity a
instance Monad Identity where
  return = Identity
  Identity a >>= f = f a
In fact, it's not clear to me that very many monads have meaningful "shape"s: for example, what does shape mean in the State, Reader, Writer, ST, STM, or IO monads?

The key combinator for monads is (>>=). Knowing that it combines a monadic value with a function that produces one, and reading its type signature, the power of monads becomes more apparent:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
The future action can depend entirely on the outcome of the first action, because it is a function of its result. This power comes at a price though: Functions in Haskell are entirely opaque, so there is no way for you to get any information about a composed action without actually running it. As a side note, this is where arrows come in.

A function with a signature like h indeed cannot do many interesting things beyond performing some arithmetic on its argument. So, you have the correct intuition there.
However, it might help to look at commonly used libraries for functions with similar signatures. You'll find that the most generic ones, as you'd expect, perform generic monad operations like return, liftM, or join. Also, when you use liftM or fmap to lift an ordinary function into a monadic function, you typically wind up with a similarly generic signature, and this is quite convenient for integrating pure functions with monadic code.
In order to use the structure that a particular monad offers, you inevitably need to use some knowledge about the specific monad you're in to build new and interesting computations in that monad. Consider the state monad, (s -> (a, s)). Without knowing that type, we can't write get = \s -> (s, s), but without being able to access the state, there's not much point to being in the monad.
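To illustrate, a minimal from-scratch sketch of such a state monad (purely for exposition; the real one lives in Control.Monad.State and is defined differently):
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s
    in runState (k a) s'

-- 'get' needs the concrete representation; no generic 'Monad m => m s' can play its role.
get :: State s s
get = State $ \s -> (s, s)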

The simplest type of a function satisfying the requirement I can imagine is this:
enigma :: Monad m => m () -> m ()
One can implement it in one of the following ways:
enigma1 m = m -- not changing the shape
enigma2 _ = return () -- changing the shape
This was a very simple change -- enigma2 just discards the shape and replaces it with the trivial one. Another kind of generic change is combining two shapes together:
foo :: Monad m => m () -> m () -> m ()
foo a b = a >> b
The result of foo can have shape different from both a and b.
A third obvious change of shape, requiring the full power of the monad, is join:
join :: Monad m => m (m a) -> m a
join x = x >>= id
The shape of join x is usually not the same as of x itself.
Combining those primitive changes of shape, one can derive non-trivial things like sequence, foldM and the like.
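For instance, sequence needs nothing beyond return and (>>=); a sketch that mirrors the usual definition:
-- named sequence' only to avoid clashing with the Prelude version
sequence' :: Monad m => [m a] -> m [a]
sequence' []       = return []
sequence' (m : ms) = m >>= \x -> sequence' ms >>= \xs -> return (x : xs)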

Does
h :: (MonadFail m, Num a, Eq a) => a -> m a
h 0 = fail "Failed."
h a = return a
suit your needs? For example,
> [0,1,2,3] >>= h
[1,2,3]

This isn't a full answer, but I have a few things to say about your question that don't really fit into a comment.
Firstly, Monad and Functor are typeclasses; they classify types. So it is odd to say that "a monad can change shape; a functor cannot." I believe what you are trying to talk about is a "monadic value" or perhaps a "monadic action": a value whose type is m a for some Monad m of kind * -> * and some other type a of kind *. I'm not entirely sure what to call a value of type f a for some Functor f; I suppose I'd call it a "value in a functor", though that's not the best description of, say, IO String (and IO is a functor).
Secondly, note that all Monads are necessarily Functors (fmap = liftM), so I'd say the difference you observe is between fmap and >>=, or even between f and g, rather than between Monad and Functor.

What can Arrows do that Monads can't?

Arrows seem to be gaining popularity in the Haskell community, but it seems to me like Monads are more powerful. What is gained by using Arrows? Why can't Monads be used instead?
Every monad gives rise to an arrow
import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Arrow (..))

newtype Kleisli m a b = Kleisli (a -> m b)

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  (Kleisli f) . (Kleisli g) = Kleisli (\x -> g x >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\(a, b) -> f a >>= \fa -> return (fa, b))
But, there are arrows which are not monads. Thus, there are arrows which do things that you can't do with monads. A good example is the arrow transformer to add some static information
data StaticT m c a b = StaticT m (c a b)
instance (Category c, Monoid m) => Category (StaticT m c) where
  id = StaticT mempty id
  (StaticT m1 f) . (StaticT m2 g) = StaticT (m1 <> m2) (f . g)

instance (Arrow c, Monoid m) => Arrow (StaticT m c) where
  arr f = StaticT mempty (arr f)
  first (StaticT m f) = StaticT m (first f)
This arrow transformer is useful because it can be used to keep track of static properties of a program. For example, you can use it to instrument your API to statically measure how many calls you are making.
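As a hypothetical sketch of that instrumentation idea, building on the StaticT and Kleisli definitions (and imports) above; call, callCount and pipeline are illustrative names of my own, not from any library:
import Data.Monoid (Sum (..))

-- Each primitive API call carries a static count of one alongside the actual effect.
call :: String -> StaticT (Sum Int) (Kleisli IO) a a
call name = StaticT (Sum 1) (Kleisli (\x -> putStrLn ("calling " ++ name) >> return x))

-- Read the accumulated count without running anything.
callCount :: StaticT (Sum Int) c a b -> Int
callCount (StaticT (Sum n) _) = n

pipeline :: StaticT (Sum Int) (Kleisli IO) () ()
pipeline = call "render" . call "fetchOrders" . call "fetchUser"

-- callCount pipeline == 3, known statically, before any IO happens.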
I've always found it difficult to think of the issue in these terms: what is gained by using arrows. As other commenters have mentioned, every monad can trivially be turned into an arrow. So a monad can do all the arrow-y things. However, we can make Arrows that are not monads. That is to say, we can make types that can do these arrow-y things without making them support monadic binding. It might not seem like the case, but the monadic bind function is actually a pretty restrictive (hence powerful) operation that disqualifies many types.
See, to support bind, you have to be able to assert that regardless of the input type, what's going to come out is going to be wrapped in the monad.
(>>=) :: forall a b. m a -> (a -> m b) -> m b
But how would we define bind for a type like data Foo a = F Bool a? Surely, we could combine one Foo's a with another's, but how would we combine the Bools? Imagine that the Bool marked, say, whether or not the value of the other parameter had changed. If I have a = F False whatever and I bind it into a function, I have no idea whether or not that function is going to change whatever. I can't write a bind that correctly sets the Bool. This is often called the problem of static meta-information: I cannot inspect the function being bound into to determine whether or not it will alter whatever.
There are several other cases like this: types that represent mutating functions, parsers that can exit early, etc. But the basic idea is this: monads set a high bar that not all types can clear. Arrows allow you to compose types (that may or may not be able to support this high, binding standard) in powerful ways without having to satisfy bind. Of course, you do lose some of the power of monads.
Moral of the story: there's nothing an arrow can do that a monad cannot, because a monad can always be made into an arrow. However, sometimes you can't make your types into monads but you still want to allow them to have most of the compositional flexibility and power of monads.
Many of these ideas were inspired by the superb Understanding Haskell Arrows (backup)
Well, I'm going to cheat slightly here by changing the question from Arrow to Applicative. A lot of the same motives apply, and I know applicatives better than arrows. (And in fact, every Arrow is also an Applicative but not vice-versa, so I'm just taking it a bit further down the slope toward Functor.)
Just like every Monad is an Arrow, every Monad is also an Applicative. There are Applicatives that are not Monads (e.g., ZipList), so that's one possible answer.
But assume we're dealing with a type that admits of a Monad instance as well as an Applicative. Why might we sometime use the Applicative instance instead of Monad? Because Applicative is less powerful, and that comes with benefits:
There are things that we know that the Monad can do which the Applicative cannot. For example, if we use the Applicative instance of IO to assemble a compound action from simpler ones, none of the actions we compose may use the results of any of the others. All that applicative IO can do is execute the component actions and combine their results with pure functions.
Applicative types can be written so that we can do powerful static analysis of the actions before executing them. So you can write a program that inspects an Applicative action before executing it, figures out what it's going to do, and uses that to improve performance, tell the user what's going to be done, etc.
As an example of the first, I've been working on designing a kind of OLAP calculation language using Applicatives. The type admits of a Monad instance, but I've deliberately avoided having that, because I want the queries to be less powerful than what Monad would allow. Applicative means that each calculation will bottom out to a predictable number of queries.
As an example of the latter, I'll use a toy example from my still-under-development operational Applicative library. If you write the Reader monad as an operational Applicative program instead, you can examine the resulting Readers to count how many times they use the ask operation:
{-# LANGUAGE GADTs, RankNTypes, ScopedTypeVariables #-}

import Control.Applicative.Operational

-- | A 'Reader' is an 'Applicative' program that uses the 'ReaderI'
-- instruction set.
type Reader r a = ProgramAp (ReaderI r) a

-- | The only 'Reader' instruction is 'Ask', which requires both the
-- environment and result type to be @r@.
data ReaderI r a where
    Ask :: ReaderI r r

ask :: Reader r r
ask = singleton Ask

-- | We run a 'Reader' by translating each instruction in the instruction set
-- into an @r -> a@ function. In the case of 'Ask' the translation is 'id'.
runReader :: forall r a. Reader r a -> r -> a
runReader = interpretAp evalI
  where
    evalI :: forall x. ReaderI r x -> r -> x
    evalI Ask = id

-- | Count how many times a 'Reader' uses the 'Ask' instruction. The 'viewAp'
-- function translates a 'ProgramAp' into a syntax tree that we can inspect.
countAsk :: forall r a. Reader r a -> Int
countAsk = count . viewAp
  where
    count :: forall x. ProgramViewAp (ReaderI r) x -> Int
    -- Pure :: a -> ProgramViewAp instruction a
    count (Pure _) = 0
    -- (:<**>) :: instruction a
    --         -> ProgramViewAp instruction (a -> b)
    --         -> ProgramViewAp instruction b
    count (Ask :<**> k) = succ (count k)
As best as I understand, you can't write countAsk if you implement Reader as a monad. (My understanding comes from asking right here in Stack Overflow, I'll add.)
This same motive is actually one of the ideas behind Arrows. One of the big motivating examples for Arrow was a parser combinator design that uses "static information" to get better performance than monadic parsers. What they mean by "static information" is more or less the same as in my Reader example: it's possible to write an Arrow instance where the parsers can be inspected very much like my Readers can. Then the parsing library can, before executing a parser, inspect it to see if it can predict ahead of time that it will fail, and skip it in that case.
In one of the direct comments to your question, jberryman mentions that arrows may in fact be losing popularity. I'd add that as I see it, Applicative is what arrows are losing popularity to.
References:
Paolo Capriotti & Ambrus Kaposi, "Free Applicative Functors". Very highly recommended.
Gergo Erdi, "Static analysis with Applicatives". Inspirational, but I find it hard to follow...
The question isn't quite right. It's like asking why would you eat oranges instead of apples, since apples seem more nutritious all around.
Arrows, like monads, are a way of expressing computations, but they have to obey a different set of laws. In particular, the laws tend to make arrows nicer to use when you have function-like things.
The Haskell Wiki lists a few introductions to arrows. In particular, the Wikibook is a nice high level introduction, and the tutorial by John Hughes is a good overview of the various kinds of arrows.
For a real world example, compare this tutorial which uses Hakyll 3's arrow-based interface, with roughly the same thing in Hakyll 4's monad-based interface.
I always found one of the really practical use cases of arrows to be stream programming.
Look at this:
data Stream a = Stream a (Stream a)
data SF a b = SF (a -> (b, SF a b))
SF a b is a synchronous stream function.
You can define a function from it that transforms Stream a into Stream b that never hangs and always outputs one b for one a:
(<<$>>) :: SF a b -> Stream a -> Stream b
SF f <<$>> Stream a as =
  let (b, sf') = f a
  in Stream b $ sf' <<$>> as
There is an Arrow instance for SF. In particular, you can compose SFs:
(>>>) :: SF a b -> SF b c -> SF a c
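For concreteness, here is a sketch of what those instances might look like (a standard construction, not spelled out in this answer; it assumes import Prelude hiding (id, (.)) together with Control.Category and Control.Arrow):
instance Category SF where
  id = SF (\a -> (a, id))
  SF g . SF f = SF $ \a ->
    let (b, f') = f a
        (c, g') = g b
    in (c, g' . f')

instance Arrow SF where
  arr f = SF (\a -> (f a, arr f))
  first (SF f) = SF $ \(a, c) ->
    let (b, f') = f a
    in ((b, c), first f')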
Now try to do this in monads. It doesn't work well. You might say that Stream a == Reader Nat a and thus it's a monad, but the monad instance is very inefficient. Imagine the type of join:
join :: Stream (Stream a) -> Stream a
You have to extract the diagonal from a stream of streams. This means O(n) complexity for the nth element, but using the Arrow instance for SFs gives you O(1) in principle! (And also deals with time and space leaks.)

Haskell: Computation "in a monad" -- meaning?

In reading about monads, I keep seeing phrases like "computations in the Xyz monad". What does it mean for a computation to be "in" a certain monad?
I think I have a fair grasp on what monads are about: allowing computations to produce outputs that are usually of some expected type, but can alternatively or additionally convey some other information, such as error status, logging info, state and so on, and allow such computations to be chained.
But I don't get how a computation would be said to be "in" a monad. Does this just refer to a function that produces a monadic result?
Examples: (search "computation in")
http://www.haskell.org/tutorial/monads.html
http://www.haskell.org/haskellwiki/All_About_Monads
http://ertes.de/articles/monads.html
Generally, a "computation in a monad" means not just a function returning a monadic result, but such a function used inside a do block, or as part of the second argument to (>>=), or anything else equivalent to those. The distinction is relevant to something you said in a comment:
"Computation" occurs in func f, after val extracted from input monad, and before result is wrapped as monad. I don't see how the computation per se is "in" the monad; it seems conspicuously "out" of the monad.
This isn't a bad way to think about it--in fact, do notation encourages it because it's a convenient way to look at things--but it does result in a slightly misleading intuition. Nowhere is anything being "extracted" from a monad. To see why, forget about (>>=)--it's a composite operation that exists to support do notation. The more fundamental definition of a monad consists of three orthogonal functions:
fmap :: (a -> b) -> (m a -> m b)
return :: a -> m a
join :: m (m a) -> m a
...where m is a monad.
Now think about how to implement (>>=) with these: starting with arguments of type m a and a -> m b, your only option is using fmap to get something of type m (m b), after which you can use join to flatten the nested "layers" to get just m b.
In other words, nothing is being taken "out" of the monad--instead, think of the computation as going deeper into the monad, with successive steps being collapsed into a single layer of the monad.
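Written out, that construction of (>>=) is just the following (a sketch; join is exported by Control.Monad):
import Control.Monad (join)

bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)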
Note that the monad laws are also much simpler from this perspective--essentially, they say that when join is applied doesn't matter as long as the nesting order is preserved (a form of associativity) and that the monadic layer introduced by return does nothing (an identity value for join).
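In this formulation the monad laws read roughly as follows (the standard join-based equations, stated here for reference):
join . fmap join   = join . join         -- it doesn't matter in which order nested layers are collapsed
join . return      = id                  -- the layer introduced by return collapses away
join . fmap return = id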
Does this just refer to a function that produces a monadic result?
Yes, in short.
In long, it's because Monad allows you to inject values into it (via return) but once inside the Monad they're stuck. You have to use some function like runWriter or runCont, which is strictly more specific than Monad, to get values back "out".
More than that, Monad (really, its partner, Applicative) is the essence of having a "container" and allowing computations to happen inside of it. That's what (>>=) gives you, the ability to do interesting computations "inside" the Monad.
So functions like Monad m => m a -> (a -> m b) -> m b let you compute with and around and inside a Monad. Functions like Monad m => a -> m a let you inject into the Monad. Functions like m a -> a would let you "escape" the Monad except they don't exist in general (only in specific). So, for conversation's sake we like to talk about functions that have result types like Monad m => m a as being "inside the monad".
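For instance, some of those monad-specific escape hatches and their types (from base and mtl):
runIdentity :: Identity a -> a
runWriter   :: Writer w a -> (a, w)
runState    :: State s a -> s -> (a, s)
runCont     :: Cont r a -> (a -> r) -> r
Nothing of this sort exists for IO (short of unsafePerformIO).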
Usually monad stuff is easier to grasp when starting with "collection-like" monads as example. Imagine you calculate the distance of two points:
data Point = Point Double Double
distance :: Point -> Point -> Double
distance p1 p2 = undefined
Now you may have a certain context. E.g. one of the points may be "illegal" because it is out of some bounds (e.g. on the screen). So you wrap your existing computation in the Maybe monad:
distance :: Maybe Point -> Maybe Point -> Maybe Double
distance p1 p2 = undefined
You have exactly the same computation, but with the additional feature that there may be "no result" (encoded as Nothing).
Or you have two groups of "possible" points, and need their mutual distances (e.g. to use later the shortest connection). Then the list monad is your "context":
distance :: [Point] -> [Point] -> [Double]
distance p1 p2 = undefined
Or the points are entered by a user, which makes the calculation "nondeterministic" (in the sense that you depend on things in the outside world, which may change), then the IO monad is your friend:
distance :: IO Point -> IO Point -> IO Double
distance p1 p2 = undefined
The computation remains always the same, but happens to take place in a certain "context", which adds some useful aspects (failure, multi-value, nondeterminism). You can even combine these contexts (monad transformers).
You may write a definition that unifies the definitions above, and works for any monad:
distance :: Monad m => m Point -> m Point -> m Double
distance p1 p2 = do
  Point x1 y1 <- p1
  Point x2 y2 <- p2
  return $ sqrt ((x1-x2)^2 + (y1-y2)^2)
That proves that our computation is really independent of the actual monad, which leads to formulations such as "x is computed in(-side) the y monad".
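For instance, under the definitions above, the same code gives (expected results shown as comments):
-- distance (Just (Point 0 0)) (Just (Point 3 4))      == Just 5.0
-- distance Nothing            (Just (Point 3 4))      == Nothing
-- distance [Point 0 0, Point 1 1] [Point 3 4]         == [5.0, sqrt 13]
-- distance (return (Point 0 0)) (return (Point 3 4))  :: IO Double   -- an IO action yielding 5.0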
Looking at the links you provided, it seems that a common usage of "computation in" is with regards to a single monadic value. Excerpts:
Gentle introduction - here we run a computation in the SM monad, but the computation is the monadic value:
-- run a computation in the SM monad
runSM :: S -> SM a -> (a,S)
All about monads - previous computation refers to a monadic value in the sequence:
The >> function is a convenience operator that is used to bind a monadic computation that does not require input from the previous computation in the sequence
Understanding monads - here the first computation could refer to e.g. getLine, a monadic value:
(binding) gives an intrinsic idea of using the result of a computation in another computation, without requiring a notion of running computations.
So as an analogy, if I say i = 4 + 2, then i is the value 6, but it is equally a computation, namely the computation 4 + 2. It seems the linked pages use computation in this sense - computation as a monadic value - at least some of the time, in which case it makes sense to use the expression "a computation in" the given monad.
Consider the IO monad. A value of type IO a is a description of a large (often infinite) number of behaviours where a behaviour is a sequence of IO events (reads, writes, etc). Such a value is called a "computation"; in this case it is a computation in the IO monad.

Are monads just ways of composing functions which would otherwise not compose?

The bind function seems remarkably similar to a composition function. And it helps in composing functions which return monadic values.
Is there anything more enlightening about monads than this idea?
Is there anything more enlightening about monads than this idea?
Yes, very much so!
Monadic binding is a way of composing functions where something else is happening over and above the application of a function to an input. What the something else is depends on the monad under consideration.
The Maybe monad is function composition with the possibility that one of the functions in the chain might fail, in which case the failure is automatically propagated to the end of the chain. The expression return x >>= f >>= g applies f to the value x. If the result is Nothing (i.e. failure) then the entire expression returns Nothing, with no other work taking place. Otherwise, g is applied to f x and its result is returned.
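To make the composition reading literal, Control.Monad even exports Kleisli composition (>=>); a small sketch in the Maybe monad (safeRecip and safeSqrt are made-up names for illustration):
import Control.Monad ((>=>))

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

recipSqrt :: Double -> Maybe Double
recipSqrt = safeRecip >=> safeSqrt   -- Nothing as soon as either step fails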
The Either e monad, where e is some type, is function composition with the possibility of failure with an error of type e. This is conceptually similar to the Maybe monad, but we get some more information about how and where the failure occurred.
The List monad is function composition with the possibility of returning multiple values. If f and g are functions that return a list of outputs, then return x >>= f >>= g applies f to x, and then applies g to every output of f, collecting all of the outputs of these applications together into one big list.
Other monads represent function composition in various other contexts. Very briefly:
The Writer w monad is function composition with a value of type w being accumulated on the side. For example, often w = [String] (a list of strings) which is useful for logging.
The Reader r monad is function composition where each of the functions is also allowed to depend on a value of type r. This is useful when building evaluators for domain-specific languages, when r might be a map from variable names to values in the language - this allows simple implementation of lexical closures, for example.
The State s monad is a bit like a combination of reader and writer. It is function composition where each function is allowed to depend on, and modify, a value of type s.
The composition point of view is in fact quite enlightening in itself.
Monads can be seen as a kind of "funky composition" between functions of the form a -> M b. You can compose f :: a -> M b and g :: b -> M c into something of type a -> M c, via the monad operations (just bind the return value of f into g).
This makes arrows of the form a -> M b the arrows of a category, termed the Kleisli category of M.
If M were not a monad but just a functor, you would only be able to compose fmap g and f into (fmap g) . f :: a -> M (M c). Monads have join :: M (M a) -> M a, which I leave as an (easy and useful) exercise for you to define using only the monad operations (for mathematicians, join is usually part of the definition of a monad). Then join . (fmap g) . f provides the composition for the Kleisli category.
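Spelled out as code (a sketch; the library version of this composition is (<=<) from Control.Monad):
import Control.Monad (join)

composeKleisli :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
composeKleisli g f = join . fmap g . f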
All the funk of monadic composition can thus be seen to happen inside join: join represents the composition of side effects. For IO it sequences the effects, for List it concatenates lists, for Maybe it "stops the computation" when a result is Nothing, for Writer it sequences the writes, for State it sequences operations on the state, etc. It can be seen as an "overloadable semicolon" if you know C-like languages. It is very instructive to think about monads this way.
Of course, Dan Piponi explains this much better than I do, and here is some post of his that you may find enlightening: http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html

How to extract value from monadic action

Is there a built-in function with signature :: (Monad m) => m a -> a ?
Hoogle tells that there is no such function.
Can you explain why?
A monad only supplies two functions:
return :: Monad m => a -> m a
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Both of these return something of type m a, so there is no way to combine these in any way to get a function of type Monad m => m a -> a. To do that, you'll need more than these two functions, so you need to know more about m than that it's a monad.
For example, the Identity monad has runIdentity :: Identity a -> a, and several monads have similar functions, but there is no way to provide it generically. In fact, the inability to "escape" from the monad is essential for monads like IO.
There is probably a better answer than this, but one way to see why you cannot have a type (Monad m) => m a -> a is to consider a null monad:
data Null a = Null
instance Monad Null where
  return a = Null
  ma >>= f = Null
Now (Monad m) => m a -> a means Null a -> a, ie getting something out of nothing. You can't do that.
This doesn't exist because Monad is a pattern for composition, not a pattern for decomposition. You can always put more pieces together with the interface it defines. It doesn't say a thing about taking anything apart.
Asking why you can't take something out is like asking why Java's Iterator interface doesn't contain a method for adding elements to what it's iterating over. It's just not what the Iterator interface is for.
And your arguments about specific types having a kind of extract function follow in exactly the same way. Some particular implementation of Iterator might have an add function. But since it's not what Iterators are for, the presence of that method on some particular instance is irrelevant.
And the presence of fromJust is just as irrelevant. It's not part of the behavior Monad is intended to describe. Others have given lots of examples of types where there is no value for extract to work on. But those types still support the intended semantics of Monad. This is important. It means that Monad is a more general interface than you are giving it credit for.
Suppose there was such a function:
extract :: Monad m => m a -> a
Now you could write a "function" like this:
appendLine :: String -> String
appendLine str = str ++ extract getLine
Unless the extract function was guaranteed never to terminate, this would violate referential transparency, because the result of appendLine "foo" would (a) depend on something other than "foo", (b) evaluate to different values when evaluated in different contexts.
Or in simpler words, if there was an actually useful extract operation Haskell would not be purely functional.
Is there a built-in function with signature :: (Monad m) => m a -> a ?
If Hoogle says there isn't...then there probably isn't, assuming your definition of "built in" is "in the base libraries".
Hoogle tells that there is no such function. Can you explain why?
That's easy, because Hoogle didn't find any function in the base libraries that matches that type signature!
More seriously, I suppose you were asking for the monadic explanation. The issues are safety and meaning. (See also my previous thoughts on magicMonadUnwrap :: Monad m => m a -> a)
Suppose I tell you I have a value which has the type [Int]. Since we know that [] is a monad, this is similar to telling you I have a value which has the type Monad m => m Int. So let's suppose you want to get the Int out of that [Int]. Well, which Int do you want? The first one? The last one? What if the value I told you about is actually an empty list? In that case, there isn't even an Int to give you! So for lists, it is unsafe to try and extract a single value willy-nilly like that. Even when it is safe (a non-empty list), you need a list-specific function (for example, head) to clarify what you mean by desiring f :: [Int] -> Int. Hopefully you can intuit from here that the meaning of Monad m => m a -> a is simply not well defined. It could hold multiple meanings for the same monad, or it could mean absolutely nothing at all for some monads, and sometimes, it's just simply not safe.
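To see how many inequivalent choices there are even for lists alone (all of these are in the standard libraries, and none of them is privileged by Monad):
head        :: [a] -> a        -- partial: errors on the empty list
last        :: [a] -> a        -- also partial, and a different choice
listToMaybe :: [a] -> Maybe a  -- total, but changes the result type (Data.Maybe)
And for Maybe itself, fromJust :: Maybe a -> a is partial in the same way.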
Because it may make no sense (actually, does make no sense in many instances).
For example, I might define a Parser Monad like this:
data Parser a = Parser (String -> [(a, String)])
Now there is absolutely no sensible default way to get a String out of a Parser String. Actually, there is no way at all to get a String out of this with just the Monad.
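The only way out is a function that knows the concrete representation, for example (a sketch; runParser is a name introduced here for this hand-rolled type, not taken from a particular library):
runParser :: Parser a -> String -> [(a, String)]
runParser (Parser p) = p

-- Even then you get back a list of (result, leftover input) pairs for a given
-- input string, not a bare value; nothing in Monad provides this.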
There is a useful extract function and some other functions related to this at http://hackage.haskell.org/package/comonad-5.0.4/docs/Control-Comonad.html
It's only defined for some functors/monads, and it doesn't necessarily give you the whole answer but rather an answer. So there could be subclasses of Comonad that give you intermediate stages of picking the answer, where you could control it; this is probably related to possible subclasses of Traversable. I don't know if such things are defined anywhere.
The reason Hoogle doesn't list this function at all appears to be that the comonad package isn't indexed; otherwise I think extract would show up in the results (with a Comonad constraint rather than Monad) for those Monads that also have a Comonad instance. Perhaps this is because the Hoogle parser is incomplete and fails on some lines of code.
My alternative answers:
You can perform a (possibly recursive) case analysis if you've imported the type's constructors.
You can move the code that would use the extracted value into the monad, using monad >>= \a -> return $ (your code uses a here) as an alternative code structure; as long as you can eventually convert the monad to IO () in a way that prints your outputs, you're done. This doesn't look like extraction, but maths isn't the same as the real world.
Well, technically there is unsafePerformIO for the IO monad.
But, as the name itself suggests, this function is evil and you should only use it if you really know what you are doing (and if you have to ask whether you know or not, then you don't).
