Related
I have learned the definitions of functor and monad, but I am still unable to figure out the difference between them beyond the definitions themselves. I have read some answers to this problem; in What is the difference between a Functor and a Monad? one comment says
A functor takes a pure function (and a functorial value) whereas a monad takes a Kleisli arrow, i.e. a function that returns a monad (and a monadic value). Hence you can chain two monads and the second monad can depend on the result of the previous one. You cannot do this with functors.
This comment is interesting and gives me a sense of their difference, but I still have some questions.
Why can a functor not use the result of the previous one? Since fmap :: (a -> b) -> f a -> f b, when I partially apply fmap to a pure function I get a function f a -> f b, and the f b depends on the f a. Does "the result" mean the data inside the functor?
In category theory I can understand the comment, since in category theory I cannot get at the elements, but in Haskell I find that I can use the result of a functor since Haskell remembers the data constructor. Does Haskell get in the way of understanding this comment? Should I understand it in terms of pure category theory?
When using fmap :: Functor f => (a -> b) -> f a -> f b, the argument function can never depend on any of the structure from the functor f itself; how can it, when all it ever gets passed are a values, nothing to do with f? When you use fmap to get an f a -> f b function, the code that is responsible for building the final f b value is the code of fmap itself, not the code of the a -> b function being mapped. The function being mapped might not even be called at all (e.g. if you're using the Maybe functor and the f a value is Nothing, there are no a values inside that for fmap to pass to the a -> b function, and it will never be called).
But fmap is fully polymorphic in a and b, meaning the code of fmap doesn't know what those types are and cannot assume anything about them. So fmap has to build the final f b value in a way that only depends on the functor structure added by f; the code of fmap cannot look at any a values and make decisions about what to do. In fact really the only thing a lawful fmap can do is return a value with exactly the same structure as the input f a value, only with any as that were inside replaced by the b value returned by the a -> b function. That's why you can just deriving Functor any data type; there's at most one way to make any given data type a Functor, and the compiler can just do it for you.
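For instance, a tiny illustration of my own in GHCi, using the Maybe functor the answer mentions:

fmap (+ 1) (Just 3)   -- Just 4: same Just shape, only the value inside is replaced
fmap (+ 1) Nothing    -- Nothing: there is no value, so the mapped function is never called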
But the monad =<< function [1] has this type: Monad m => (a -> m b) -> m a -> m b. Here the mapped function has type a -> m b. That means the final m b structure returned at the end isn't purely determined by the code of =<< (at least not necessarily). The a -> m b function knows what monad is being used and returns some monadic structure in its m b value, not just a raw b. The mapped function still doesn't receive any monadic structure, so its intermediate m b result can't depend on that; any dependency the final m b value has on the monadic structure in the original m a has to be determined by the code for =<< (which again, cannot make any decisions based on a values itself). But the mapped function (if it is called) does get to contribute some monadic structure, and that can depend on the a values (because the mapped function doesn't have to be completely polymorphic in a; it can inspect a values and make decisions based on them).
This means that ultimately any part of the final output m b might depend on any part of the input (if we don't know what function was mapped, or how the =<< function works internally). This is quite different from the functor case, where even without knowing anything about the specific functor or the mapped function we know that the final output will always look like "a copy of the input with as replaced by bs" (and specifically which b replaces each a is determined by the mapped function).
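To see the contrast concretely, here is a small example of my own with Maybe, where the shape of the output depends on the value inside:

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- halve =<< Just 4   == Just 2
-- halve =<< Just 3   == Nothing  -- the mapped function decided to add "failure" structure
-- halve =<< Nothing  == Nothing  -- here =<< itself decided, without ever calling halve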
It's possibly worth noting: almost everything I've said here is a direct consequence of what these types mean:
Functor f => (a -> b) -> f a -> f b
Monad m => (a -> m b) -> m a -> m b
These aren't special properties of functors and monads that you have to remember; they're just a consequence of those operations having those types. (The functor and monad laws express things that the types don't, but I haven't needed to invoke them to explain this difference.)
Once you are really deeply used to thinking about types the Haskell way, you can figure this out for yourself just by looking at the types. I didn't of course; I had it explained to me (a dozen times) when I was learning, and I've remembered it since then. But now I can figure out similar things about new APIs I've never seen before, without any tutorials or explanations.
[1] I'm using the reversed =<< operator rather than the traditional >>= operator merely because it lines up better with the fmap / <$> operator.
A monad is a functor, or rather a functor along with two additional operations. (The original(?) name for a monad was a triple.)
It helps to look at the original mathematical definition, which provided an operation unit :: Monad m => a -> m a, but instead of (>>=) :: Monad m => m a -> (a -> m b) -> m b, defined an operation called join :: Monad m => m (m a) -> m a, which lets you "remove" a layer of lifting. The two formulations are equivalent, as join can be defined in terms of (>>=)
join ma = ma >>= id
and vice versa:
ma >>= f = join $ fmap f ma
The last definition gives a different interpretation of a monad: it's a functor that you can "compress"; chaining comes for free by mapping a function over an already lifted value, then combining the two layers of lifting back to a single layer.
unit is required in either case because there's no way to get a lifted value in the first place with just fmap.
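For concreteness, here is what join looks like for two familiar functors (a sketch of mine; these agree with the standard instances):

joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe Nothing         = Nothing
joinMaybe (Just Nothing)  = Nothing
joinMaybe (Just (Just x)) = Just x

joinList :: [[a]] -> [a]
joinList = concat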
I would like to create a type that is an instance of Monad such that the result type can only be an instance of a specific typeclass. I would like to be able to write something like
data T a = T a
class C a where
...
instance Monad T where
return :: (C a) => a -> m a
return x = ...
(>>=) :: (C a, C b) => m a -> (a -> m b) -> m b
p >>= f = ...
In the actual code that I'm working on, I need a typeclass constraint on the result type so that a specific function from the typeclass is available in the definitions for return and (>>=).
Is there any way to do this?
As per chi's comment, the best solution is probably to use either constrained monads or indexed monads. Both solutions have the caveat of having to work around the standard library, but that can be done, for example by rebinding the do-notation via the RebindableSyntax extension. The best example of constrained monads I was able to find is the constrained-monads package available on Hackage, while the best example I was able to find of indexed monads is a post at Kwang's Haskell Blog.
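As a rough sketch of the constrained-monad idea (the class and method names here are mine, not the constrained-monads package's API): each instance declares, via an associated constraint, which element types it supports.

{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import Data.Kind (Constraint)

class CMonad m where
  type Suitable m a :: Constraint
  creturn :: Suitable m a => a -> m a
  cbind   :: (Suitable m a, Suitable m b) => m a -> (a -> m b) -> m b

-- e.g. a Set-like type could declare: type Suitable Set a = Ord a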
There are quite a few questions here about whether or not certain transformations of types that involve Monads are possible.
For instance, it's possible to make a function of type f :: Monad m => [m a] -> m [a], but impossible to make a function of type g :: Monad m => m [a] -> [m a] as a proper inverse of the former (i.e. f . g = id).
I want to understand what rules one can use to determine if a function of that type can or cannot be constructed, and why these types cannot be constructed if they disobey these rules.
The way that I've always thought about monads is that a value of type Monad m => m a is some program of type m that executes and produces an a. The monad laws reinforce this notion by thinking of composition of these programs as "do thing one then do thing two", and produce some sort of combination of the results.
Right unit: Taking a program and just returning its value should be the same as just running the original program.
m >>= return = m
Left unit: If you create a simple program that just returns a value, and then pass that value to a function that creates a new program, then the resulting program should be just as if you had called the function on the value.
return x >>= f = f x
Associativity: If you execute a program m, feed its result into a function f that produces another program, and then feed that result into a third function g that also produces a program, then this is identical to creating a new function that returns a program based on feeding the result of f into g, and feeding the result of m into it.
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
Using this intuition about a "program that creates a value", we can come to some conclusions about what it means for the functions that you've provided in your examples.
Monad m => [m a] -> m [a] Deviating from the intuitive definition of what this function should do is hard: Execute each program in sequence and collect the results. This produces another program that produces a list of results.
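A possible implementation matching that intuition (essentially the standard sequence; the primed name is mine, only to avoid clashing with the Prelude):

sequence' :: Monad m => [m a] -> m [a]
sequence' []       = return []
sequence' (m : ms) = m >>= \x -> sequence' ms >>= \xs -> return (x : xs)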
Monad m => m [a] -> [m a] This doesn't really have a clear intuitive definition, since it's a program that produces a list. You can't create a list without getting access to the resulting values which in this case means executing a program. Certain monads, that have a clear way to extract a value from a program, and provide some variant of m a -> a, like the State monad, can have sane implementations of some function like this. It would have to be application specific though. Other monads, like IO, you cannot escape from.
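For example, with State, a sketch of mine using evalState from Control.Monad.State: pick an initial state, run the program, and re-wrap the results. This is exactly the kind of application-specific escape described above (it does not thread any state between the resulting actions):

import Control.Monad.State (State, evalState)

unsequenceState :: s -> State s [a] -> [State s a]
unsequenceState s0 p = map return (evalState p s0)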
Monad m => (a -> m b) -> m (a -> b) This also doesn't really have a clear intuitive definition. Here you have a function f that produces a program of type m b, but you're trying to return a function of type m (a -> b). Based on the a, f creates completely different programs with different execution semantics. You cannot encompass these variations in a single program of type m (a -> b), even if you can provide a proper mapping of a -> b in the program's resulting value.
This intuition doesn't really encompass the idea behind monads completely. For example, the monadic context of a list doesn't really behave like a program.
Something easy to remember is: "you can't escape from a Monad" (it's kind of designed that way). Transforming m [a] to [m a] is a form of escape, so you can't.
On the other hand you can easily create a Monad from something (using return) so traversing ([m a] -> m [a]) is usually possible.
If you take a look at the monad laws, a monad only constrains you to define a composition function, not an inverse function.
In the first example you can compose the list elements.
In the second one, Monad m => m [a] -> [m a], you cannot split an action into multiple actions (action composition is not reversible).
Example:
Let's say you have to read 2 values.
s1 <- action
s2 <- action
Doing so, the action result s2 depends on the side effect made by s1.
You can bind these 2 actions into 1 action to be executed in the same order, but you cannot split them and execute the action producing s2 without s1 having made the side effect needed by the second one.
Not really an answer, and much too informal for my liking, but nevertheless I have a few interesting observations that won't fit into a comment. First, let's consider this function you refer to:
f :: Monad m => [m a] -> m [a]
This signature is in fact stronger than it needs to be. The current generalization of this is the sequenceA function from Data.Traversable:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
...which doesn't need the full power of Monad, and can work with any Traversable and not just lists.
Second: the fact that Traversable only requires Applicative is I think really significant to this question, because applicative computations have a "list-like" structure. Every applicative computation can be rewritten to have the form f <$> a1 <*> ... <*> an for some f. Or, informally, every applicative computation can be seen as a list of actions a1, ... an (heterogeneous on the result type, homogeneous in the functor type), plus an n-place function to combine their results.
If we look at sequenceA through this lens, all it does is choose an f built out of the appropriate nested number of list constructors:
sequenceA [a1, ..., an] == f <$> a1 <*> ... <*> an
where f v1 ... vn = v1 : ... : vn : []
Now, I haven't had the chance to try and prove this yet, but my conjectures would be the following:
Mathematically speaking at least, sequenceA has a left inverse in free applicative functors. If you have a Functor f => [FreeA f a] and you sequenceA it, what you get is a list-like structure that contains those computations and a combining function that makes a list out of their results. I suspect however that it's not possible to write such a function in Haskell (unSequenceFreeA :: (Traversable t, Functor f) => FreeA f (t a) -> Maybe (t (FreeA f a))), because you can't pattern match on the structure of the combining function in the FreeA to tell that it's of the form f v1 ... vn = v1 : ... : vn : [].
sequenceA doesn't have a right inverse in a free applicative, however, because the combining function that produces a list out of the results from the a1, ... an actions may do anything; for example, return a constant list of arbitrary length (unrelated to the computations that the free applicative value performs).
Once you move to non-free applicative functors, there will no longer be a left inverse for sequenceA, because the non-free applicative functor's equations translate into cases where you can no longer tell apart which of two t (f a) "action lists" was the source for a given f (t a) "list-producing action."
Arrows seem to be gaining popularity in the Haskell community, but it seems to me like Monads are more powerful. What is gained by using Arrows? Why can't Monads be used instead?
Every monad gives rise to an arrow
import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Arrow (..))

newtype Kleisli m a b = Kleisli (a -> m b)

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  (Kleisli f) . (Kleisli g) = Kleisli (\x -> g x >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\(a, b) -> f a >>= \fa -> return (fa, b))
But there are arrows which are not monads. Thus, there are arrows which do things that you can't do with monads. A good example is the following arrow transformer, which adds some static information:
data StaticT m c a b = StaticT m (c a b)
instance (Category c, Monoid m) => Category (StaticT m c) where
id = StaticT mempty id
(StaticT m1 f) . (StaticT m2 g) = StaticT (m1 <> m2) (f . g)
instance (Arrow c, Monoid m) => Arrow (StaticT m c) where
arr f = StaticT mempty (arr f)
first (StaticT m f) = StaticT m (first f)
This arrow transformer is useful because it can be used to keep track of static properties of a program. For example, you can use it to instrument your API and statically measure how many calls you are making.
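A hypothetical usage sketch (the names here are mine): tag each primitive operation with a count of 1, and read the total off a composed pipeline without ever running it.

import Data.Monoid (Sum (..))

type Counted c = StaticT (Sum Int) c

apiCall :: c a b -> Counted c a b
apiCall f = StaticT (Sum 1) f

callCount :: Counted c a b -> Int
callCount (StaticT (Sum n) _) = n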
I've always found it difficult to think of the issue in these terms: what is gained by using arrows. As other commenters have mentioned, every monad can trivially be turned into an arrow. So a monad can do all the arrow-y things. However, we can make Arrows that are not monads. That is to say, we can make types that can do these arrow-y things without making them support monadic binding. It might not seem like the case, but the monadic bind function is actually a pretty restrictive (hence powerful) operation that disqualifies many types.
See, to support bind, you have to be able to assert that, regardless of the input type, what's going to come out is going to be wrapped in the monad.
(>>=) :: forall a b. m a -> (a -> m b) -> m b
But how would we define bind for a type like data Foo a = F Bool a? Surely, we could combine one Foo's a with another's, but how would we combine the Bools? Imagine that the Bool marked, say, whether or not the value of the other parameter had changed. If I have a = F False whatever and I bind it into a function, I have no idea whether or not that function is going to change whatever. I can't write a bind that correctly sets the Bool. This is often called the problem of static meta-information: I cannot inspect the function being bound to determine whether or not it will alter whatever.
There are several other cases like this: types that represent mutating functions, parsers that can exit early, etc. But the basic idea is this: monads set a high bar that not all types can clear. Arrows allow you to compose types (that may or may not be able to clear this high bar) in powerful ways without having to satisfy bind. Of course, you do lose some of the power of monads.
Moral of the story: there's nothing an arrow can do that monad cannot, because a monad can always be made into an arrow. However, sometimes you can't make your types into monads but you still want to allow them to have most of the compositional flexibility and power of monads.
Many of these ideas were inspired by the superb Understanding Haskell Arrows (backup)
Well, I'm going to cheat slightly here by changing the question from Arrow to Applicative. A lot of the same motives apply, and I know applicatives better than arrows. (And in fact, every Arrow is also an Applicative but not vice versa, so I'm just taking it a bit further down the slope toward Functor.)
Just like every Monad is an Arrow, every Monad is also an Applicative. There are Applicatives that are not Monads (e.g., ZipList), so that's one possible answer.
But assume we're dealing with a type that admits of a Monad instance as well as an Applicative. Why might we sometime use the Applicative instance instead of Monad? Because Applicative is less powerful, and that comes with benefits:
There are things that we know that the Monad can do which the Applicative cannot. For example, if we use the Applicative instance of IO to assemble a compound action from simpler ones, none of the actions we compose may use the results of any of the others. All that applicative IO can do is execute the component actions and combine their results with pure functions (see the short snippet after the next point).
Applicative types can be written so that we can do powerful static analysis of the actions before executing them. So you can write a program that inspects an Applicative action before executing it, figures out what it's going to do, and uses that to improve performance, tell the user what's going to be done, etc.
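To make the first of these concrete, a minimal illustration of my own:

-- Both getLine actions run, but neither can inspect the other's result;
-- all we can do is combine the results with the pure function (,).
both :: IO (String, String)
both = (,) <$> getLine <*> getLine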
As an example of the first, I've been working on designing a kind of OLAP calculation language using Applicatives. The type admits of a Monad instance, but I've deliberately avoided having that, because I want the queries to be less powerful than what Monad would allow. Applicative means that each calculation will bottom out to a predictable number of queries.
As an example of the latter, I'll use a toy example from my still-under-development operational Applicative library. If you write the Reader monad as an operational Applicative program instead, you can examine the resulting Readers to count how many times they use the ask operation:
{-# LANGUAGE GADTs, RankNTypes, ScopedTypeVariables #-}
import Control.Applicative.Operational
-- | A 'Reader' is an 'Applicative' program that uses the 'ReaderI'
-- instruction set.
type Reader r a = ProgramAp (ReaderI r) a
-- | The only 'Reader' instruction is 'Ask', which requires both the
-- environment and result type to be @r@.
data ReaderI r a where
Ask :: ReaderI r r
ask :: Reader r r
ask = singleton Ask
-- | We run a 'Reader' by translating each instruction in the instruction set
-- into an @r -> a@ function. In the case of 'Ask' the translation is 'id'.
runReader :: forall r a. Reader r a -> r -> a
runReader = interpretAp evalI
where evalI :: forall x. ReaderI r x -> r -> x
evalI Ask = id
-- | Count how many times a 'Reader' uses the 'Ask' instruction. The 'viewAp'
-- function translates a 'ProgramAp' into a syntax tree that we can inspect.
countAsk :: forall r a. Reader r a -> Int
countAsk = count . viewAp
where count :: forall x. ProgramViewAp (ReaderI r) x -> Int
-- Pure :: a -> ProgramViewAp instruction a
count (Pure _) = 0
-- (:<**>) :: instruction a
-- -> ProgramViewAp instruction (a -> b)
-- -> ProgramViewAp instruction b
count (Ask :<**> k) = succ (count k)
As best I understand, you can't write countAsk if you implement Reader as a monad. (My understanding comes from asking right here on Stack Overflow, I'll add.)
This same motive is actually one of the ideas behind Arrows. One of the big motivating examples for Arrow was a parser combinator design that uses "static information" to get better performance than monadic parsers. What they mean by "static information" is more or less the same as in my Reader example: it's possible to write an Arrow instance where the parsers can be inspected very much like my Readers can. Then the parsing library can, before executing a parser, inspect it to see if it can predict ahead of time that it will fail, and skip it in that case.
In one of the direct comments to your question, jberryman mentions that arrows may in fact be losing popularity. I'd add that as I see it, Applicative is what arrows are losing popularity to.
References:
Paolo Capriotti & Ambrus Kaposi, "Free Applicative Functors". Very highly recommended.
Gergo Erdi, "Static analysis with Applicatives". Inspirational, but I find it hard to follow...
The question isn't quite right. It's like asking why would you eat oranges instead of apples, since apples seem more nutritious all around.
Arrows, like monads, are a way of expressing computations, but they have to obey a different set of laws. In particular, the laws tend to make arrows nicer to use when you have function-like things.
The Haskell Wiki lists a few introductions to arrows. In particular, the Wikibook is a nice high level introduction, and the tutorial by John Hughes is a good overview of the various kinds of arrows.
For a real world example, compare this tutorial which uses Hakyll 3's arrow-based interface, with roughly the same thing in Hakyll 4's monad-based interface.
I always found one of the really practical use cases of arrows to be stream programming.
Look at this:
data Stream a = Stream a (Stream a)
data SF a b = SF (a -> (b, SF a b))
SF a b is a synchronous stream function.
You can define a function from it that transforms Stream a into Stream b that never hangs and always outputs one b for one a:
(<<$>>) :: SF a b -> Stream a -> Stream b
SF f <<$>> Stream a as = let (b, sf') = f a
in Stream b $ sf' <<$>> as
There is an Arrow instance for SF. In particular, you can compose SFs:
(>>>) :: SF a b -> SF b c -> SF a c
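A possible implementation of that composition (a sketch; composeSF is my name, while the real Category/Arrow instance would provide it as (.) / (>>>)):

composeSF :: SF a b -> SF b c -> SF a c
composeSF (SF f) (SF g) = SF $ \a ->
  let (b, sf') = f a
      (c, sg') = g b
  in  (c, composeSF sf' sg')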
Now try to do this in monads. It doesn't work well. You might say that Stream a == Reader Nat a and thus it's a monad, but the monad instance is very inefficient. Imagine the type of join:
join :: Stream (Stream a) -> Stream a
You have to extract the diagonal from a stream of streams. This means O(n) complexity for the nth element, but using the Arrow instance for SFs gives you O(1) in principle! (And also deals with time and space leaks.)
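For comparison, here is a sketch of mine of the diagonal join such a Stream monad would need; reaching the nth element forces n tail steps, which is where the O(n) cost comes from:

headS :: Stream a -> a
headS (Stream a _) = a

tailS :: Stream a -> Stream a
tailS (Stream _ as) = as

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream a as) = Stream (f a) (mapS f as)

joinS :: Stream (Stream a) -> Stream a
joinS (Stream s ss) = Stream (headS s) (joinS (mapS tailS ss))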
Most tutorials seem to give a lot of examples of monads (IO, state, list and so on) and then expect the reader to be able to abstract the overall principle, and only then do they mention category theory. I don't tend to learn very well by trying to generalise from examples, and I would like to understand from a theoretical point of view why this pattern is so important.
Judging from this thread:
Can anyone explain Monads?
this is a common problem, and I've tried looking at most of the tutorials suggested (except the Brian Beck videos, which won't play on my Linux machine).
Does anyone know of a tutorial that starts from category theory and explains the IO, state, and list monads in those terms? The following is my unsuccessful attempt to do so:
As I understand it a monad consists of a triple: an endo-functor and two natural transformations.
The functor is usually shown with the type:
(a -> b) -> (m a -> m b)
I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
(a -> b) -> (a -> b)
I think the answer is that the domain and codomain both have a type of:
(a -> b) | (m a -> m b) | (m m a -> m m b) and so on ...
But I'm not really sure if that works or fits in with the definition of the functor given?
When we move on to the natural transformation it gets even worse. If I understand correctly a natural transformation is a second order functor (with certain rules) that is a functor from one functor to another one. So since we have defined the functor above the general type of the natural transformations would be:
((a -> b) -> (m a -> m b)) -> ((a -> b) -> (m a -> m b))
But the actual natural transformations we are using have type:
a -> m a
m a -> (a -> m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Martin
A quick disclaimer: I'm a little shaky on category theory in general, while I get the impression you have at least some familiarity with it. Hopefully I won't make too much of a hash of this...
Does anyone know of a tutorial that starts from category theory and explains IO, state, list monads in those terms?
First of all, ignore IO for now, it's full of dark magic. It works as a model of imperative computations for the same reasons that State works for modelling stateful computations, but unlike the latter IO is a black box with no way to deduce the monadic structure from the outside.
The functor is usually shown with the type: (a -> b) -> (m a -> m b) I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
I suspect you are misinterpreting how type variables in Haskell relate to the category theory concepts.
First of all, yes, that specifies an endofunctor, on the category of Haskell types. A type variable such as a is not anything in this category, however; it's a variable that is (implicitly) universally quantified over all objects in the category. Thus, the type (a -> b) -> (a -> b) describes only endofunctors that map every object to itself.
Type constructors describe an injective function on objects, where the elements of the constructor's codomain cannot be described by any means except as an application of the type constructor. Even if two type constructors produce isomorphic results, the resulting types remain distinct. Note that type constructors are not, in the general case, functors.
The type variable m in the functor signature, then, represents a one-argument type constructor. Out of context this would normally be read as universal quantification, but that's incorrect in this case since no such function can exist. Rather, the type class definition binds m and allows the definition of such functions for specific type constructors.
The resulting function, then, says that for any type constructor m which has fmap defined, for any two objects a and b and a morphism between them, we can find a morphism between the types given by applying m to a and b.
Note that while the above does, of course, define an endofunctor on Hask, it is not even remotely general enough to describe all such endofunctors.
But the actual natural transformations we are using have type:
a -> m a
m a -> (a -> m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Well, no, they aren't. A natural transformation is roughly a function (not a functor) between functors. The two natural transformations that specify a monad M look like I -> M where I is the identity functor, and M ∘ M -> M where ∘ is functor composition. In Haskell, we have no good way of working directly with either a true identity functor or with functor composition. Instead, we discard the identity functor to get just (Functor m) => a -> m a for the first, and write out nested type constructor application as (Functor m) => m (m a) -> m a for the second.
The first of these is obviously return; the second is a function called join, which is not part of the type class. However, join can be written in terms of (>>=), and the latter is more often useful in day-to-day programming.
As far as specific monads go, if you want a more mathematical description, here's a quick sketch of an example:
For some fixed type S, consider two functors F and G where F(x) = (S, x) and G(x) = S -> x (It should hopefully be obvious that these are indeed valid functors).
These functors are also adjoints; consider natural transformations unit :: x -> G (F x) and counit :: F (G x) -> x. Expanding the definitions gives us unit :: x -> (S -> (S, x)) and counit :: (S, S -> x) -> x. The types suggest uncurried function application and tuple construction; feel free to verify that those work as expected.
An adjunction gives rise to a monad by composition of the functors, so taking G ∘ F and expanding the definition, we get G (F x) = S -> (S, x), which is the definition of the State monad. The unit for the adjunction is of course return; and you should be able to use counit to define join.
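Spelling that out in Haskell (a sketch with concrete types; unit, counit and join' are my names for this illustration):

unit :: x -> (s -> (s, x))
unit x = \s -> (s, x)

counit :: (s, s -> x) -> x
counit (s, f) = f s

-- join for the composite G . F, built from counit:
join' :: (s -> (s, s -> (s, x))) -> (s -> (s, x))
join' m = \s -> counit (m s)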
This page does exactly that. I think your main confusion is that the class doesn't really make the Type a functor, but it defines a functor from the category of Haskell types into the category of that type.
Following the notation of the link, assuming F is a Haskell Functor, it means that there is a functor from the category of Hask to the category of F.
Roughly speaking, Haskell does its category theory all in just one category, whose objects are Haskell types and whose arrows are functions between these types. It's definitely not a general-purpose language for modelling category theory.
A (mathematical) functor is an operation turning things in one category into things in another, possibly entirely different, category. An endofunctor is then a functor which happens to have the same source and target categories. In Haskell, a functor is an operation turning things in the category of Haskell types into other things also in the category of Haskell types, so it is always an endofunctor.
[If you're following the mathematical literature, technically, the operation '(a->b)->(m a -> m b)' is just the arrow part of the endofunctor m, and 'm' is the object part]
When Haskellers talk about working 'in a monad' they really mean working in the Kleisli category of the monad. The Kleisli category of a monad is a thoroughly confusing beast at first, and normally needs at least two colours of ink to give a good explanation, so take the following attempt for what it is and check out some references (unfortunately Wikipedia is useless here for all but the straight definitions).
Suppose you have a monad 'm' on the category C of Haskell types. Its Kleisli category Kl(m) has the same objects as C, namely Haskell types, but an arrow a ~(f)~> b in Kl(m) is an arrow a -(f)-> mb in C. (I've used a squiggly line in my Kleisli arrow to distinguish the two). To reiterate: the objects and arrows of Kl(m) are also objects and arrows of C, but the arrows point to different objects in Kl(m) than in C. If this doesn't strike you as odd, read it again more carefully!
Concretely, consider the Maybe monad. Its Kleisli category is just the collection of Haskell types, and its arrows a ~(f)~> b are functions a -(f)-> Maybe b. Or consider the (State s) monad whose arrows a ~(f)~> b are functions a -(f)-> (State s b) == a -(f)-> (s->(s,b)). In any case, you're always writing a squiggly arrow as a shorthand for doing something to the type of the codomain of your functions.
[Note that State is not a monad, because the kind of State is * -> * -> *, so you need to supply one of the type parameters to turn it into a mathematical monad.]
So far so good, hopefully, but suppose you want to compose arrows a ~(f)~> b and b ~(g)~> c. These are really Haskell functions a -(f)-> mb and b -(g)-> mc which you cannot compose because the types don't match. The mathematical solution is to use the 'multiplication' natural transformation u:mm->m of the monad as follows: a ~(f)~> b ~(g)~> c == a -(f)-> mb -(mg)-> mmc -(u_c)-> mc to get an arrow a->mc which is a Kleisli arrow a ~(f;g)~> c as required.
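In Haskell notation that recipe is just (a sketch, using join from Control.Monad):

import Control.Monad (join)

kleisliCompose :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisliCompose f g = \a -> join (fmap g (f a))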
Perhaps a concrete example helps here. In the Maybe monad, you cannot compose functions f : a -> Maybe b and g : b -> Maybe c directly, but by lifting g to
maybe_g :: Maybe b -> Maybe (Maybe c)
maybe_g Nothing = Nothing
maybe_g (Just b) = Just (g b)
and using the 'obvious'
u :: Maybe (Maybe c) -> Maybe c
u Nothing = Nothing
u (Just Nothing) = Nothing
u (Just (Just c)) = Just c
you can form the composition u . maybe_g . f which is the function a -> Maybe c that you wanted.
In the (State s) monad, it's similar but messier: Given two monadic functions a ~(f)~> b and b ~(g)~> c which are really a -(f)-> (s->(s,b)) and b -(g)-> (s->(s,c)) under the hood, you compose them by lifting g into
state_s_g :: (s->(s,b)) -> (s->(s,(s->(s,c))))
state_s_g p s1 = let (s2, b) = p s1 in (s2, g b)
then you apply the 'multiplication' natural transformation u, which is
u :: (s->(s,(s->(s,c)))) -> (s->(s,c))
u p1 s1 = let (s2, p2) = p1 s1 in p2 s2
which (sort of) plugs the final state of f into the initial state of g.
In Haskell, this turns out to be a bit of an unnatural way to work so instead there's the (>>=) function which basically does the same thing as u but in a way that makes it easier to implement and use. This is important: (>>=) is not the natural transformation 'u'. You can define each in terms of the other, so they're equivalent, but they're not the same thing. The Haskell version of 'u' is written join.
The other thing missing from this definition of Kleisli categories is the identity on each object: a ~(1_a)~> a which is really a -(n_a)-> ma where n is the 'unit' natural transformation. This is written return in Haskell, and doesn't seem to cause as much confusion.
I learned category theory before I came to Haskell, and I too have had difficulty with the mismatch between what mathematicians call a monad and what they look like in Haskell. It's no easier from the other direction!
Not sure I understand what the question was, but yes, you are right: a monad in Haskell is defined as a triple:
m :: * -> * -- this is an endofunctor from Haskell types to Haskell types!
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
but the common definition from category theory is a different triple:
m :: * -> *
return :: a -> m a
join :: m (m a) -> m a
It is slightly confusing, but it's not so hard to show that these two definitions are equivalent.
To do that we need to define join in terms of (>>=) (and vice versa).
First step:
join :: m (m a) -> m a
join x = ?
This gives us x :: m (m a).
All we can do with something that has type m _ is to apply (>>=) to it:
(x >>=) :: (m a -> m b) -> m b
Now we need something as a second argument for (>>=), and also, from the type of join, we have the constraint (x >>= y) :: m a.
So y here will have type y :: m a -> m a, and id :: a -> a (specialised to m a -> m a) fits it very well:
join x = x >>= id
The other way
(>>=) :: m a -> (a -> m b) -> m b
(>>=) x y = ?
Where x :: m a and y :: a -> m b.
To get m b from x and y we need something of type a.
Unfortunately, we can't extract a from m a. But we can substitute it with something else (remember, a monad is also a functor):
fmap :: (a -> b) -> m a -> m b
fmap y x :: m (m b)
And it fits perfectly as an argument for join: (>>=) x y = join (fmap y x).
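Collecting the two derivations into compilable code (the primed names are mine, only to avoid clashing with the standard join and >>=):

join' :: Monad m => m (m a) -> m a
join' x = x >>= id

bind' :: Monad m => m a -> (a -> m b) -> m b
bind' x y = join' (fmap y x)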
The best way to look at monads and computational effects is to start with where Haskell got the notion of monads for computational effects from, and then look at Haskell after you understand that. See this paper in particular: Notions of Computation and Monads, by E. Moggi.
See also Moggi's earlier paper which shows how monads work for the lambda calculus alone: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.2787
The fact that monads capture substitution, among other things (http://blog.sigfpe.com/2009/12/where-do-monads-come-from.html), and substitution is key to the lambda calculus, should give a good clue as to why they have so much expressive power.
While monads originally came from category theory, this doesn't mean that category theory is the only abstract context in which you can view them. A different viewpoint is given by operational semantics. For an introduction, have a look at my Operational Monad Tutorial.
One way to look at IO is to consider it as a strange kind of state monad. Remember that the state monad looks like:
data State s a = State (s -> (s, a))
where the "s" argument is the data type you want to thread through the computation. Also, this version of "State" doesn't have "get" and "put" actions and we don't export the constructor.
Now imagine a type
data RealWorld = RealWorld ......
This has no real definition, but notionally a value of type RealWorld holds the state of the entire universe outside the computer. Of course we can never have a value of type RealWorld, but you can imagine something like:
getChar :: RealWorld -> (RealWorld, Char)
In other words the "getChar" function takes a state of the universe before the keyboard button has been pressed, and returns the key pressed plus the state of the universe after the key has been pressed. Of course the problem is that the previous state of the world is still available to be referenced, which can't happen in reality.
But now we write this:
type IO = State RealWorld
getChar :: IO Char
Notionally, all we have done is wrap the previous version of "getChar" as a state action. But by doing this we can no longer access the "RealWorld" values because they are wrapped up inside the State monad.
So when a Haskell program wants to change a lightbulb it takes hold of the bulb and applies a "rotate" function to the RealWorld value inside IO.
For me, so far, the explanation that comes closest to tying together monads in category theory and monads in Haskell is that monads are a monoid whose objects have the type a -> m b. I can see that these objects are very close to an endofunctor, and so the composition of such functions is related to an imperative sequence of program statements. Also, functions which return IO functions are valid in pure functional code until the inner function is called from outside.
The id element is 'a -> m a', which fits in very well, but the multiplication element is function composition, which should be:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
This is not quite function composition, but close enough (I think to get true function composition we need a complementary function which turns m b back into b; then we get function composition if we apply these in pairs?). I'm not quite sure how to get from that to this:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
I've got a feeling I may have seen an explanation of this in all the stuff that I read, without understanding its significance the first time through, so I will do some re-reading to try to (re)find an explanation of this.
The other thing I would like to do is link together all the different category theory explanations: endofunctor+2 natural transformations, Kleisli category, a monoid whose objects are monoids and so on. For me the thing that seems to link all these explanations is that they are two level. That is, normally we treat category objects as black-boxes where we imply their properties from their outside interactions, but here there seems to be a need to go one level inside the objects to see what’s going on? We can explain monads without this but only if we accept apparently arbitrary constructions.
Martin
See this question: is chaining operations the only thing that the monad class solves?
In it, I explain my view that we must differentiate between the Monad class and individual types that solve individual problems. The Monad class, by itself, only solves the important problem of "chaining operations with choice" and makes this solution available to types that are instances of it (by means of "inheritance").
On the other hand, if a given type that solves a given problem faces the problem of "chaining operations with choice", then it should be made an instance of (inherit from) the Monad class.
The fact is that problems do not get solved merely by being a Monad. It would be like saying that "wheels" solve many problems, when actually "wheels" only solve one problem, and things with wheels solve many different problems.