How does an environment happen to be a partially applied function, and even a hom functor? - haskell

I have seen Reader being used to major benefit many times in the wild. (One notable example would be stack, built around a straightforward derivative of Reader that can inform the user of the sufficiency of its contents on the type level.) After some thinking, I arrived at an understanding that this benefit is merely on the level of code structure, and, in a sense, all that Reader does is supply a parameter, in many places, to a complicated wiring of functions. That is, I came to believe we can always replace a reader that holds some x with a lambda abstraction of the form λx. ... x ... x .... This seems to align with the official explanations that claim:
... the partially applied function type (->) r is a simple reader monad ...
However, there is a long way from noting that Reader is a way to write down a lambda abstraction piecewise, to claiming that it is a partially applied function.
Is there a function that is applied, but not partially? It would simply be a value:
λ :t id 1
id 1 :: Num a => a
λ :t 1
1 :: Num a => a
Is there a function that is not even partially applied? We'll never know:
λ :t fromMaybe
fromMaybe :: a -> Maybe a -> a
λ :t flip maybe id
flip maybe id :: b -> Maybe b -> b
Even ignoring this as nitpicking, I would not take on faith that (->) r (why not just write (r ->)?) is exactly and singularly a Reader monad. Maybe I could write an instance that typechecks. Maybe it would even obey the laws. As long as I do not think of my functions as Readers, as long as I do not have the right intuition, the vision of it, it is as useful to me as the first proof of the Four Colour Theorem. On the other hand, how can I be sure this is the only way of defining a monad on functions? There are several Monoids on Num, at least two Applicatives on Lists − would it not be too reckless to consider a function a Reader monad alone?
The predicament does not end here. Once I go searching for an answer, I stumble upon an even more puzzling note: Reader happens to be the hom functor now, or even a representable functor in general. And the folks from the other end of the spectrum actually know ahead of time that there would be such a construct in Haskell, and even spell its type, the same as it is spelled in the aforementioned official explanations. Now, this is way over my head, but I can parse the definition of hom functor from Mac Lane. With some imagination, it can be seen that, granted a function a -> b as a morphism in the (supposed) category Hask, we may compose it with id to obtain... a function a -> b again, this time as an element of the set hom(a, b).
Does this connect in any way with some of these morphisms being partially applied? Or with the use of Reader as option store in stack? Can I actually be shown the object and arrow functions of the hom functor Hask -> Set? (I shall take an endofunctor Hask -> Hask as a reasonable approximation.) Would that be my trusty fellows pure and fmap?
And, well, how do I actually use Reader, after that?

I can't answer all of your questions, but let's start with the easy ones:
(->) r (why not just write (r ->)?)
Because the latter is a syntax error in Haskell. You can't use this section syntax on types.
... claiming that it is a partially applied function.
That's not what it's saying. The quote is:
the partially applied function type (->) r is a simple reader monad
It's a partially applied type, not a partially applied function. Or in other words, it's parsed like this: ((partially applied) (function type))
The type constructor for function types in Haskell is spelled ->.
A fully applied function type looks like r -> a (or equivalently (->) r a), where r is the argument type and a the result type.
Thus (->) r is a partial application of -> (the function type).
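You can see this concretely by asking GHCi for kinds (output as printed by recent GHCs with default settings; the exact rendering can vary between versions):
λ :kind (->)
(->) :: * -> * -> *
λ :kind (->) Int
(->) Int :: * -> *
λ :kind (->) Int Char
(->) Int Char :: *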
If we ignore monad transformers and ReaderT, the straightforward definition of Reader is:
newtype Reader r a = Reader (r -> a)
runReader :: Reader r a -> r -> a
runReader (Reader f) x = f x
-- or rather:
runReader :: Reader r a -> (r -> a)
runReader (Reader f) = f
or equivalently:
newtype Reader r a = Reader{ runReader :: r -> a }
That is, Reader is just a newtype for -> (a very thin wrapper), with runReader for unwrapping.
You can even make -> an instance of MonadReader just by copying the instance for Reader and removing all the Reader / runReader wrapping/unwrapping.
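For illustration (these instances already live in base, so this is a sketch rather than something to compile yourself), the unwrapped versions look roughly like this:
instance Functor ((->) r) where
  fmap = (.)                    -- post-compose: transform the result, keep taking an r
-- (base also provides the Applicative instance in between)
instance Monad ((->) r) where
  return a = \_ -> a            -- ignore the environment
  f >>= k  = \r -> k (f r) r    -- hand the same environment r to both stages
Compare with the Reader instances: the only difference is that the newtype wrapping and unwrapping is gone.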

Related

What is the difference between functor and monad, intuitively?

I have learned the definitions of functor and monad, but I am still unable to figure out the difference between them beyond those definitions. I have read some answers to this problem; in What is the difference between a Functor and a Monad? one comment says:
A functor takes a pure function (and a functorial value) whereas a monad takes a Kleisli arrow, i.e. a function that returns a monad (and a monadic value). Hence you can chain two monads and the second monad can depend on the result of the previous one. You cannot do this with functors.
This comment is interesting and gives me a sense of their difference. But I still have some questions.
Why can't a functor use the result of the previous one? Since fmap :: (a -> b) -> f a -> f b, when I curry fmap with a pure function, I get an f a -> f b function, and the f b depends on the f a. Does "the result" mean the data inside the functor?
In category theory I can understand the comment, since I cannot get at the elements there, but in Haskell I find that I can use the result of a functor, since Haskell remembers the data constructor. Does Haskell prevent me from understanding this comment? Should I understand it in terms of pure category theory?
When using fmap :: Functor f => (a -> b) -> f a -> f b, the argument function can never depend on any of the structure from the functor f itself; how can it, when all it ever gets passed are a values, nothing to do with f? When you use fmap to get an f a -> f b function, the code that is responsible for building the final f b value is the code of fmap itself, not the code of the a -> b function being mapped. The function being mapped might not even be called at all (e.g. if you're using the Maybe functor and the f a value is Nothing, there are no a values inside that for fmap to pass to the a -> b function, and it will never be called).
But fmap is fully polymorphic in a and b, meaning the code of fmap doesn't know what those types are and cannot assume anything about them. So fmap has to build the final f b value in a way that only depends on the functor structure added by f; the code of fmap cannot look at any a values and make decisions about what to do. In fact really the only thing a lawful fmap can do is return a value with exactly the same structure as the input f a value, only with any as that were inside replaced by the b value returned by the a -> b function. That's why you can just deriving Functor any data type; there's at most one way to make any given data type a Functor, and the compiler can just do it for you.
But the monad =<< function [1] has this type: Monad m => (a -> m b) -> m a -> m b. Here the mapped function has type a -> m b. That means the final m b structure returned at the end isn't purely determined by the code of =<< (at least not necessarily). The a -> m b function knows what monad is being used and returns some monadic structure in its m b value, not just a raw b. The mapped function still doesn't receive any monadic structure, so its intermediate m b result can't depend on that; any dependency the final m b value has on the monadic structure in the original m a has to be determined by the code for =<< (which again, cannot make any decisions based on a values itself). But the mapped function (if it is called) does get to contribute some monadic structure, and that can depend on the a values (because the mapped function doesn't have to be completely polymorphic in a; it can inspect a values and make decisions based on them).
This means that ultimately any part of the final output m b might depend on any part of the input (if we don't know what function was mapped, or how the =<< function works internally). This is quite different from the functor case, where even without knowing anything about the specific functor or the mapped function we know that the final output will always look like "a copy of the input with as replaced by bs" (and specifically which b replaces each a is determined by the mapped function).
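A tiny Maybe illustration of this difference (half is a made-up helper, not from any library):
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing
-- fmap can only rewrite the value; the Just/Nothing shape of the input is preserved:
--   fmap (+1) (Just 4)  ==  Just 5
--   fmap (+1) Nothing   ==  Nothing
-- (=<<) lets the mapped function contribute structure of its own, chosen by looking at the value:
--   half =<< Just 4     ==  Just 2
--   half =<< Just 3     ==  Nothing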
It's possibly worth noting: almost everything I've said here is a direct consequence of what these types mean:
Functor f => (a -> b) -> f a -> f b
Monad m => (a -> m b) -> m a -> m b
These aren't special properties of functors and monads you have to remember, it's just a consequence of those operations having those types. (The functor and monad laws express things that the types don't, but I've not needed to invoke those to explain this difference)
Once you are really deeply used to thinking about types the Haskell way, you can figure this out for yourself just by looking at the types. I didn't of course; I had it explained to me (a dozen times) when I was learning, and I've remembered it since then. But now I can figure out similar things about new APIs I've never seen before, without any tutorials or explanations.
[1] I'm using the reversed =<< operator rather than the traditional >>= operator merely because it lines up better with the fmap / <$> operator.
A monad is a functor, or rather a functor along with two additional operations. (The original(?) name for a monad was a triple.)
It helps to look at the original mathematical definition, which provided an operation unit :: Monad m => a -> m a, but instead of (>>=) :: Monad m => m a -> (a -> m b) -> m b, defined an operation called join :: Monad m => m (m a) -> m a, which lets you "remove" a layer of lifting. The two formulations are equivalent, as join can be defined in terms of (>>=)
join ma = ma >>= id
and vice versa:
ma >>= f = join $ fmap f ma
The last definition gives a different interpretation of a monad: it's a functor that you can "compress"; chaining comes for free by mapping a function over an already lifted value, then combining the two layers of lifting back to a single layer.
unit is required in either case because there's no way to get a lifted value in the first place with just fmap.
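A couple of concrete joins (from Control.Monad) make the "compressing" reading tangible:
λ import Control.Monad (join)
λ join (Just (Just 3))
Just 3
λ join (Just Nothing)
Nothing
λ join [[1,2],[3]]   -- for lists, join is concat
[1,2,3]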

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)
type family Element a
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Let's say we make an Either-style type but put the types the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
Given these flexibility advantages of only using complete types (not type constructors) in type class definitions, is there any reason to use the original style of definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, rather than just a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
mfmap :: (e -> ??) -> a -> ????
instance MultiFunctor (Maybe a) a where
mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
mfmap :: (a -> b) -> s -> t
instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
mfmap _ Nothing = Nothing
mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become to try to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple possible functional dependencies in there!) The instance itself grew some crazy type equality constraints to allow instance selection to be more reliable. And the worst part is - this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
mfmap _ Nothing = Nothing
mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course - but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
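A concrete check of the composition law against that broken instance (values fixed at Int) shows the failure:
mfmap ((+1) . (+1)) (Just 1)       -- Just 6   (1 maps to 3, which changed, so it gets doubled)
mfmap (+1) (mfmap (+1) (Just 1))   -- Just 10  (Just 1 becomes Just 4, then Just 10)
-- so mfmap (f . g) /= mfmap f . mfmap g, even though mfmap id x == x holds for every x.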
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing the traditional functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. And it also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: *->*), with the extra property (->Constraint) that each (forall (a::*) (b::*)) morphism (a->b) is lifted to a morphism between the corresponding mapped objects (-> f a->f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
| Tough (f (Tough f (a, a)))
instance (Applicative f) => Semigroup (Tough f a) where
Doable x <> Doable y = Tough . Doable $ (,)<$>x<*>y
Tough xs <> Tough ys = Tough $ xs <> ys
-- The following actually violates the semigroup associativity law. Hardly matters here I suppose...
xs <> Doable y = xs <> Tough (Doable $ fmap twice y)
Doable x <> ys = Tough (Doable $ fmap twice x) <> ys
twice x = (x,x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2^n in a monadic state.
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
ret2 :: a -> m a
class (MN2 m a, MN2 m b) => MN3 m a b where
bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import Data.Set as Set
instance MN2 Set.Set a where
-- does not require Ord
ret2 = Set.singleton
instance Prelude.Ord b => MN3 Set.Set a b where
-- Set.union requires Ord
bind2 m f = Set.fold (Set.union . f) Set.empty m
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the Monad class requires the monad to be usable at all types.
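A hypothetical usage sketch, assuming the instances above compile and Data.Set is imported as Set:
λ bind2 (Set.fromList [1,2,3]) (\x -> Set.fromList [x, x*10])
fromList [1,2,3,10,20,30]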
Also see: parameterized (indexed) monad.

Is Haskell designed to encourage Hungarian Notation?

I'm learning Haskell and started noticing common suffixes in functions like:
debugM
mapM_
mapCE
Which is known as Hungarian Notation. But at the same time I can use type classes to write non-ambiguous code like:
show
return
Since functions like map are so common and used in many contexts, why not let the type checker pick the correct polymorphic version of map, fmap, mapM, mapM_, or mapCE?
There's a little bit of "Hungarian notation", but it's quite different. In short, Haskell's type system removes the need for most of it.
The map/mapM thing is a neat example. These two functions convey the exact same concept, but cannot be polymorphically represented, because abstracting over the difference would be really noisy. So we pick a Hungarian notation instead.
To be clear, the two types are
map :: (a -> b) -> ([a] -> [b])
mapM :: Monad m => (a -> m b) -> ([a] -> m [b])
These look similar (all mapM does is add the monad), but they are not the same. The structure is revealed when you make the following synonyms
type Arr a b = a -> b
type Klei m a b = a -> m b
and rewrite the types as
map :: Arr a b -> Arr [a] [b]
mapM :: Monad m => Klei m a b -> Klei m [a] [b]
The thing to note is that Arr and Monad m => Klei m are often extremely similar things. They both form a certain structure known as a "category" which allows us to hoist all kinds of computation inside of it. [0]
What we'd like is to abstract over the choice of category with something like
class Mapping cat where
map :: cat a b -> cat [a] [b]
instance Mapping (->) where map = Prelude.map
instance Monad m => Mapping (Klei m) where map = mapM -- in spirit anyway
but it turns out that there is way more to be gained by abstracting over the list part with Functor [1]
class Functor f where
map :: (a -> b) -> (f a -> f b)
instance Functor [] where
map = Prelude.map
instance Functor Maybe where
map f Nothing = Nothing
map f (Just a) = Just (f a)
and so for simplicity's sake, we use Hungarian notation to make the difference of category instead of rolling it up into Haskell's polymorphism functionality.
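As an aside, the "in spirit anyway" above is because Klei is only a type synonym, and a partially applied synonym can't head an instance. A compiling sketch wraps it in a newtype (with Prelude's map hidden):
import Prelude hiding (map)
import qualified Prelude

class Mapping cat where
  map :: cat a b -> cat [a] [b]

newtype Klei m a b = Klei { runKlei :: a -> m b }

instance Mapping (->) where
  map = Prelude.map

instance Monad m => Mapping (Klei m) where
  map (Klei f) = Klei (mapM f)   -- mapM really is "map in the Kleisli category"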
[0] Notably, the fact that Klei m is a category implies m is a monad and the category laws become exactly the monad laws. In particular, that's my favorite way for remembering what the monad laws are.
[1] Technically, the sole method of Functor is called fmap not map, but it could and perhaps should have been called just map. The f was added so that the type signature for map remains simple (specialized to lists) and thus is a little less intimidating to beginners. Whether or not that was the right decision is a debate that continues today.
Your assumption is that all of these do roughly the same thing - they don't. map and fmap are pretty much the same function - map is just fmap specialized to the [] functor (either for historical reasons, or so that beginners would get less confusing type errors - I'm not sure).
mapM and mapM_ on the other hand are like map followed by sequence or sequence_ respectively - while what they're doing may look related, they're doing different things. Incidentally, the function that behaves like fmap for monads is... fmap (which is also aliased with a specialized signature to liftM, for historical reasons), as Monads are, by definition, also Functors; note that this is, right now, not enforced by the standard library - a historical oversight that should be corrected with GHC 7.10 if I'm not mistaken.
I don't know what to tell you about debugM and mapCE as I haven't seen these before.

What can Arrows do that Monads can't?

Arrows seem to be gaining popularity in the Haskell community, but it seems to me like Monads are more powerful. What is gained by using Arrows? Why can't Monads be used instead?
Every monad gives rise to an arrow
newtype Kleisli m a b = Kleisli (a -> m b)
instance Monad m => Category (Kleisli m) where
id = Kleisli return
(Kleisli f) . (Kleisli g) = Kleisli (\x -> (g x) >>= f)
instance Monad m => Arrow (Kleisli m) where
arr f = Kleisli (return . f)
first (Kleisli f) = Kleisli (\(a,b) -> (f a) >>= \fa -> return (fa,b))
But, there are arrows which are not monads. Thus, there are arrows which do things that you can't do with monads. A good example is the arrow transformer to add some static information
data StaticT m c a b = StaticT m (c a b)
instance (Category c, Monoid m) => Category (StaticT m c) where
id = StaticT mempty id
(StaticT m1 f) . (StaticT m2 g) = StaticT (m1 <> m2) (f . g)
instance (Arrow c, Monoid m) => Arrow (StaticT m c) where
arr f = StaticT mempty (arr f)
first (StaticT m f) = StaticT m (first f)
This arrow transformer is useful because it can be used to keep track of static properties of a program. For example, you can use it to instrument your API and statically measure how many calls you are making.
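A hypothetical sketch of that, assuming the StaticT type and instances above, plus Kleisli and (>>>) from Control.Arrow: each call carries a count of one, and the total can be read off a composed pipeline without running any of it.
import Control.Arrow (Kleisli(..), (>>>))
import Data.Monoid (Sum(..))

call :: (a -> IO b) -> StaticT (Sum Int) (Kleisli IO) a b
call f = StaticT (Sum 1) (Kleisli f)

callCount :: StaticT (Sum Int) c a b -> Int
callCount (StaticT (Sum n) _) = n

-- callCount (call fetchUser >>> call fetchOrders) == 2, with no IO performed
-- (fetchUser and fetchOrders stand in for whatever API functions you have)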
I've always found it difficult to think of the issue in these terms: what is gained by using arrows. As other commenters have mentioned, every monad can trivially be turned into an arrow. So a monad can do all the arrow-y things. However, we can make Arrows that are not monads. That is to say, we can make types that can do these arrow-y things without making them support monadic binding. It might not seem like the case, but the monadic bind function is actually a pretty restrictive (hence powerful) operation that disqualifies many types.
See, to support bind, you have to be able to assert that, regardless of the input type, what comes out is going to be wrapped in the monad.
(>>=) :: forall a b. m a -> (a -> m b) -> m b
But how would we define bind for a type like data Foo a = F Bool a? Surely we could combine one Foo's a with another's, but how would we combine the Bools? Imagine that the Bool marked, say, whether or not the value of the other parameter had changed. If I have a = F False whatever and I bind it into a function, I have no idea whether or not that function is going to change whatever. I can't write a bind that correctly sets the Bool. This is often called the problem of static meta-information: I cannot inspect the function being bound into to determine whether or not it will alter whatever.
There are several other cases like this: types that represent mutating functions, parsers that can exit early, etc. But the basic idea is this: monads set a high bar that not all types can clear. Arrows allow you to compose types (that may or may not be able to support this high, binding standard) in powerful ways without having to satisfy bind. Of course, you do lose some of the power of monads.
Moral of the story: there's nothing an arrow can do that monad cannot, because a monad can always be made into an arrow. However, sometimes you can't make your types into monads but you still want to allow them to have most of the compositional flexibility and power of monads.
Many of these ideas were inspired by the superb Understanding Haskell Arrows (backup)
Well, I'm going to cheat slightly here by changing the question from Arrow to Applicative. A lot of the same motives apply, and I know applicatives better than arrows. (And in fact, every Arrow is also an Applicative but not vice-versa, so I'm just taking it a bit further down the slope toward Functor.)
Just like every Monad is an Arrow, every Monad is also an Applicative. There are Applicatives that are not Monads (e.g., ZipList), so that's one possible answer.
But assume we're dealing with a type that admits of a Monad instance as well as an Applicative. Why might we sometime use the Applicative instance instead of Monad? Because Applicative is less powerful, and that comes with benefits:
There are things that we know that the Monad can do which the Applicative cannot. For example, if we use the Applicative instance of IO to assemble a compound action from simpler ones, none of the actions we compose may use the results of any of the others. All that applicative IO can do is execute the component actions and combine their results with pure functions.
Applicative types can be written so that we can do powerful static analysis of the actions before executing them. So you can write a program that inspects an Applicative action before executing it, figures out what it's going to do, and uses that to improve performance, tell the user what's going to be done, etc.
As an example of the first, I've been working on designing a kind of OLAP calculation language using Applicatives. The type admits of a Monad instance, but I've deliberately avoided having that, because I want the queries to be less powerful than what Monad would allow. Applicative means that each calculation will bottom out to a predictable number of queries.
As an example of the latter, I'll use a toy example from my still-under-development operational Applicative library. If you write the Reader monad as an operational Applicative program instead, you can examine the resulting Readers to count how many times they use the ask operation:
{-# LANGUAGE GADTs, RankNTypes, ScopedTypeVariables #-}
import Control.Applicative.Operational
-- | A 'Reader' is an 'Applicative' program that uses the 'ReaderI'
-- instruction set.
type Reader r a = ProgramAp (ReaderI r) a
-- | The only 'Reader' instruction is 'Ask', which requires both the
-- environment and result type to be 'r'.
data ReaderI r a where
Ask :: ReaderI r r
ask :: Reader r r
ask = singleton Ask
-- | We run a 'Reader' by translating each instruction in the instruction set
-- into an 'r -> a' function. In the case of 'Ask' the translation is 'id'.
runReader :: forall r a. Reader r a -> r -> a
runReader = interpretAp evalI
where evalI :: forall x. ReaderI r x -> r -> x
evalI Ask = id
-- | Count how many times a 'Reader' uses the 'Ask' instruction. The 'viewAp'
-- function translates a 'ProgramAp' into a syntax tree that we can inspect.
countAsk :: forall r a. Reader r a -> Int
countAsk = count . viewAp
where count :: forall x. ProgramViewAp (ReaderI r) x -> Int
-- Pure :: a -> ProgramViewAp instruction a
count (Pure _) = 0
-- (:<**>) :: instruction a
-- -> ProgramViewAp instruction (a -> b)
-- -> ProgramViewAp instruction b
count (Ask :<**> k) = succ (count k)
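A quick usage sketch, assuming the definitions above (the names here are made up for the example):
twoAsks :: Reader Int (Int, Int)
twoAsks = (,) <$> ask <*> ask

-- countAsk twoAsks    ==  2       -- counted without ever supplying an environment
-- runReader twoAsks 7 ==  (7, 7)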
As best as I understand, you can't write countAsk if you implement Reader as a monad. (My understanding comes from asking right here in Stack Overflow, I'll add.)
This same motive is actually one of the ideas behind Arrows. One of the big motivating examples for Arrow was a parser combinator design that uses "static information" to get better performance than monadic parsers. What they mean by "static information" is more or less the same as in my Reader example: it's possible to write an Arrow instance where the parsers can be inspected very much like my Readers can. Then the parsing library can, before executing a parser, inspect it to see if it can predict ahead of time that it will fail, and skip it in that case.
In one of the direct comments to your question, jberryman mentions that arrows may in fact be losing popularity. I'd add that as I see it, Applicative is what arrows are losing popularity to.
References:
Paolo Capriotti & Ambrus Kaposi, "Free Applicative Functors". Very highly recommended.
Gergo Erdi, "Static analysis with Applicatives". Inspirational, but I find it hard to follow...
The question isn't quite right. It's like asking why would you eat oranges instead of apples, since apples seem more nutritious all around.
Arrows, like monads, are a way of expressing computations, but they have to obey a different set of laws. In particular, the laws tend to make arrows nicer to use when you have function-like things.
The Haskell Wiki lists a few introductions to arrows. In particular, the Wikibook is a nice high level introduction, and the tutorial by John Hughes is a good overview of the various kinds of arrows.
For a real world example, compare this tutorial which uses Hakyll 3's arrow-based interface, with roughly the same thing in Hakyll 4's monad-based interface.
I always found one of the really practical use cases of arrows to be stream programming.
Look at this:
data Stream a = Stream a (Stream a)
data SF a b = SF (a -> (b, SF a b))
SF a b is a synchronous stream function.
You can define a function from it that transforms Stream a into Stream b that never hangs and always outputs one b for one a:
(<<$>>) :: SF a b -> Stream a -> Stream b
SF f <<$>> Stream a as = let (b, sf') = f a
in Stream b $ sf' <<$>> as
There is an Arrow instance for SF. In particular, you can compose SFs:
(>>>) :: SF a b -> SF b c -> SF a c
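A minimal sketch of that composition, assuming the SF type above (composeSF is a made-up name; the real Arrow instance routes this through Control.Category's (.)):
composeSF :: SF a b -> SF b c -> SF a c
composeSF (SF f) (SF g) = SF $ \a ->
  let (b, f') = f a      -- run the first stage, keeping its continuation
      (c, g') = g b      -- feed its output to the second stage
  in  (c, composeSF f' g')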
Now try to do this in monads. It doesn't work well. You might say that Stream a == Reader Nat a and thus it's a monad, but the monad instance is very inefficient. Imagine the type of join:
join :: Stream (Stream a) -> Stream a
You have to extract the diagonal from a stream of streams. This means O(n) complexity for the nth element, but using the Arrow instance for SFs gives you O(1) in principle! (And also deals with time and space leaks.)

Monad theory and Haskell

Most tutorials seem to give a lot of examples of monads (IO, state, list and so on) and then expect the reader to be able to abstract the overall principle, and only then do they mention category theory. I don't tend to learn very well by trying to generalise from examples, and I would like to understand from a theoretical point of view why this pattern is so important.
Judging from this thread:
Can anyone explain Monads?
this is a common problem, and I've tried looking at most of the tutorials suggested (except the Brian Beckman videos, which won't play on my Linux machine):
Does anyone know of a tutorial that starts from category theory and explains the IO, state, and list monads in those terms? The following is my unsuccessful attempt to do so:
As I understand it a monad consists of a triple: an endo-functor and two natural transformations.
The functor is usually shown with the type:
(a -> b) -> (m a -> m b)
I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
(a -> b) -> (a -> b)
I think the answer is that the domain and codomain both have a type of:
(a -> b) | (m a -> m b) | (m m a -> m m b) and so on ...
But I'm not really sure if that works or fits in with the definition of the functor given?
When we move on to the natural transformation it gets even worse. If I understand correctly a natural transformation is a second order functor (with certain rules) that is a functor from one functor to another one. So since we have defined the functor above the general type of the natural transformations would be:
((a -> b) -> (m a -> m b)) -> ((a -> b) -> (m a -> m b))
But the actual natural transformations we are using have type:
a -> m a
m a -> (a -> m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Martin
A quick disclaimer: I'm a little shaky on category theory in general, while I get the impression you have at least some familiarity with it. Hopefully I won't make too much of a hash of this...
Does anyone know of a tutorial that starts from category theory and explains IO, state, list monads in those terms?
First of all, ignore IO for now, it's full of dark magic. It works as a model of imperative computations for the same reasons that State works for modelling stateful computations, but unlike the latter IO is a black box with no way to deduce the monadic structure from the outside.
The functor is usually shown with the type: (a -> b) -> (m a -> m b) I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
I suspect you are misinterpreting how type variables in Haskell relate to the category theory concepts.
First of all, yes, that specifies an endofunctor, on the category of Haskell types. A type variable such as a is not anything in this category, however; it's a variable that is (implicitly) universally quantified over all objects in the category. Thus, the type (a -> b) -> (a -> b) describes only endofunctors that map every object to itself.
Type constructors describe an injective function on objects, where the elements of the constructor's codomain cannot be described by any means except as an application of the type constructor. Even if two type constructors produce isomorphic results, the resulting types remain distinct. Note that type constructors are not, in the general case, functors.
The type variable m in the functor signature, then, represents a one-argument type constructor. Out of context this would normally be read as universal quantification, but that's incorrect in this case since no such function can exist. Rather, the type class definition binds m and allows the definition of such functions for specific type constructors.
The resulting function, then, says that for any type constructor m which has fmap defined, for any two objects a and b and a morphism between them, we can find a morphism between the types given by applying m to a and b.
Note that while the above does, of course, define an endofunctor on Hask, it is not even remotely general enough to describe all such endofunctors.
But the actual natural transformations we are using have type:
a -> m a
m a -> (a ->m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Well, no, they aren't. A natural transformation is roughly a function (not a functor) between functors. The two natural transformations that specify a monad M look like I -> M where I is the identity functor, and M ∘ M -> M where ∘ is functor composition. In Haskell, we have no good way of working directly with either a true identity functor or with functor composition. Instead, we discard the identity functor to get just (Functor m) => a -> m a for the first, and write out nested type constructor application as (Functor m) => m (m a) -> m a for the second.
The first of these is obviously return; the second is a function called join, which is not part of the type class. However, join can be written in terms of (>>=), and the latter is more often useful in day-to-day programming.
As far as specific monads go, if you want a more mathematical description, here's a quick sketch of an example:
For some fixed type S, consider two functors F and G where F(x) = (S, x) and G(x) = S -> x (It should hopefully be obvious that these are indeed valid functors).
These functors are also adjoints; consider natural transformations unit :: x -> G (F x) and counit :: F (G x) -> x. Expanding the definitions gives us unit :: x -> (S -> (S, x)) and counit :: (S, S -> x) -> x. The types suggest uncurried function application and tuple construction; feel free to verify that those work as expected.
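Spelled out in Haskell (a small sketch, with the fixed S written as a type variable s):
unit :: x -> (s -> (s, x))
unit x = \s -> (s, x)        -- tuple construction

counit :: (s, s -> x) -> x
counit (s, f) = f s          -- uncurried function application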
An adjunction gives rise to a monad by composition of the functors, so taking G ∘ F and expanding the definition, we get G (F x) = S -> (S, x), which is the definition of the State monad. The unit for the adjunction is of course return; and you should be able to use counit to define join.
This page does exactly that. I think your main confusion is that the class doesn't really make the type a functor, but it defines a functor from the category of Haskell types into the category of that type.
Following the notation of the link, assuming F is a Haskell Functor, it means that there is a functor from the category of Hask to the category of F.
Roughly speaking, Haskell does its category theory all in just one category, whose objects are Haskell types and whose arrows are functions between these types. It's definitely not a general-purpose language for modelling category theory.
A (mathematical) functor is an operation turning things in one category into things in another, possibly entirely different, category. An endofunctor is then a functor which happens to have the same source and target categories. In Haskell, a functor is an operation turning things in the category of Haskell types into other things also in the category of Haskell types, so it is always an endofunctor.
[If you're following the mathematical literature, technically, the operation '(a->b)->(m a -> m b)' is just the arrow part of the endofunctor m, and 'm' is the object part]
When Haskellers talk about working 'in a monad' they really mean working in the Kleisli category of the monad. The Kleisli category of a monad is a thoroughly confusing beast at first, and normally needs at least two colours of ink to give a good explanation, so take the following attempt for what it is and check out some references (unfortunately Wikipedia is useless here for all but the straight definitions).
Suppose you have a monad 'm' on the category C of Haskell types. Its Kleisli category Kl(m) has the same objects as C, namely Haskell types, but an arrow a ~(f)~> b in Kl(m) is an arrow a -(f)-> mb in C. (I've used a squiggly line in my Kleisli arrow to distinguish the two). To reiterate: the objects and arrows of Kl(m) are also objects and arrows of C, but the arrows point to different objects in Kl(m) than in C. If this doesn't strike you as odd, read it again more carefully!
Concretely, consider the Maybe monad. Its Kleisli category is just the collection of Haskell types, and its arrows a ~(f)~> b are functions a -(f)-> Maybe b. Or consider the (State s) monad whose arrows a ~(f)~> b are functions a -(f)-> (State s b) == a -(f)-> (s->(s,b)). In any case, you're always writing a squiggly arrow as a shorthand for doing something to the type of the codomain of your functions.
[Note that State is not a monad, because the kind of State is * -> * -> *, so you need to supply one of the type parameters to turn it into a mathematical monad.]
So far so good, hopefully, but suppose you want to compose arrows a ~(f)~> b and b ~(g)~> c. These are really Haskell functions a -(f)-> mb and b -(g)-> mc which you cannot compose because the types don't match. The mathematical solution is to use the 'multiplication' natural transformation u:mm->m of the monad as follows: a ~(f)~> b ~(g)~> c == a -(f)-> mb -(mg)-> mmc -(u_c)-> mc to get an arrow a->mc which is a Kleisli arrow a ~(f;g)~> c as required.
Perhaps a concrete example helps here. In the Maybe monad, you cannot compose functions f : a -> Maybe b and g : b -> Maybe c directly, but by lifting g to
Maybe_g :: Maybe b -> Maybe (Maybe c)
Maybe_g Nothing = Nothing
Maybe_g (Just a) = Just (g a)
and using the 'obvious'
u :: Maybe (Maybe c) -> Maybe c
u Nothing = Nothing
u (Just Nothing) = Nothing
u (Just (Just c)) = Just c
you can form the composition u . Maybe_g . f which is the function a -> Maybe c that you wanted.
In the (State s) monad, it's similar but messier: Given two monadic functions a ~(f)~> b and b ~(g)~> c which are really a -(f)-> (s->(s,b)) and b -(g)-> (s->(s,c)) under the hood, you compose them by lifting g into
State_s_g :: (s->(s,b)) -> (s->(s,(s->(s,c))))
State_s_g p s1 = let (s2, b) = p s1 in (s2, g b)
then you apply the 'multiplication' natural transformation u, which is
u :: (s->(s,(s->(s,c)))) -> (s->(s,c))
u p1 s1 = let (s2, p2) = p1 s1 in p2 s2
which (sort of) plugs the final state of f into the initial state of g.
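Putting the pieces together (with f, State_s_g and u as above; composeK is a made-up name), the Kleisli composite a ~(f;g)~> c is just:
composeK :: a -> (s -> (s, c))
composeK = u . State_s_g . f   -- lift, then flatten the nested state transformer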
In Haskell, this turns out to be a bit of an unnatural way to work so instead there's the (>>=) function which basically does the same thing as u but in a way that makes it easier to implement and use. This is important: (>>=) is not the natural transformation 'u'. You can define each in terms of the other, so they're equivalent, but they're not the same thing. The Haskell version of 'u' is written join.
The other thing missing from this definition of Kleisli categories is the identity on each object: a ~(1_a)~> a which is really a -(n_a)-> ma where n is the 'unit' natural transformation. This is written return in Haskell, and doesn't seem to cause as much confusion.
I learned category theory before I came to Haskell, and I too have had difficulty with the mismatch between what mathematicians call a monad and what they look like in Haskell. It's no easier from the other direction!
Not sure I understand what the question was, but yes, you are right: a monad in Haskell is defined as a triple:
m :: * -> * -- this is an endofunctor from Haskell types to Haskell types!
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
but the common definition from category theory is another triple:
m :: * -> *
return :: a -> m a
join :: m (m a) -> m a
It is slightly confusing, but it's not so hard to show that these two definitions are equivalent.
To do that we need to define join in terms of (>>=) (and vice versa).
First step:
join :: m (m a) -> m a
join x = ?
This gives us x :: m (m a).
All we can do with something that has type m _ is to apply (>>=) to it:
(x >>=) :: (m a -> m b) -> m b
Now we need something to use as the second argument of (>>=), and also,
from the type of join we have the constraint (x >>= y) :: m a.
So y here must have type y :: m a -> m a, and id (specialised to m a -> m a) fits it very well:
join x = x >>= id
The other way
(>>=) :: m a -> (a -> m b) -> m b
(>>=) x y = ?
Where x :: m a and y :: a -> m b.
To get m b from x and y we need something of type a.
Unfortunately, we can't extract an a from m a. But we can replace what's inside with something else (remember, a monad is also a functor):
fmap :: (a -> b) -> m a -> m b
fmap y x :: m (m b)
And it fits perfectly as the argument for join: (>>=) x y = join (fmap y x).
The best way to look at monads and computational effects is to start with where Haskell got the notion of monads for computational effects from, and then look at Haskell after you understand that. See this paper in particular: Notions of Computation and Monads, by E. Moggi.
See also Moggi's earlier paper which shows how monads work for the lambda calculus alone: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.2787
The fact that monads capture substitution, among other things (http://blog.sigfpe.com/2009/12/where-do-monads-come-from.html), and substitution is key to the lambda calculus, should give a good clue as to why they have so much expressive power.
While monads originally came from category theory, this doesn't mean that category theory is the only abstract context in which you can view them. A different viewpoint is given by operational semantics. For an introduction, have a look at my Operational Monad Tutorial.
One way to look at IO is to consider it as a strange kind of state monad. Remember that the state monad looks like:
data State s a = State (s -> (s, a))
where the "s" argument is the data type you want to thread through the computation. Also, this version of "State" doesn't have "get" and "put" actions and we don't export the constructor.
Now imagine a type
data RealWorld = RealWorld ......
This has no real definition, but notionally a value of type RealWorld holds the state of the entire universe outside the computer. Of course we can never have a value of type RealWorld, but you can imagine something like:
getChar :: RealWorld -> (RealWorld, Char)
In other words the "getChar" function takes a state of the universe before the keyboard button has been pressed, and returns the key pressed plus the state of the universe after the key has been pressed. Of course the problem is that the previous state of the world is still available to be referenced, which can't happen in reality.
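To make that threading concrete, here is a sketch of reading two characters by passing the world along by hand (using the hypothetical getChar above):
getTwoChars :: RealWorld -> (RealWorld, (Char, Char))
getTwoChars w0 =
  let (w1, c1) = getChar w0
      (w2, c2) = getChar w1
  in  (w2, (c1, c2))
-- nothing stops us from reusing w0 a second time, which is exactly the problem just described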
But now we write this:
type IO = State RealWorld
getChar :: IO Char
Notionally, all we have done is wrap the previous version of "getChar" as a state action. But by doing this we can no longer access the "RealWorld" values because they are wrapped up inside the State monad.
So when a Haskell program wants to change a lightbulb it takes hold of the bulb and applies a "rotate" function to the RealWorld value inside IO.
For me, so far, the explanation that comes closest to tying together monads in category theory and monads in Haskell is that monads are a monoid whose objects have the type a -> m b. I can see that these objects are very close to an endofunctor, and so the composition of such functions is related to an imperative sequence of program statements. Also, functions which return IO functions are valid in pure functional code until the inner function is called from outside.
This id element is 'a -> m a' which fits in very well but the multiplication element is function composition which should be:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
This is not quite function composition, but close enough (I think to get true function composition we need a complementary function which turns m b back into a; then we get function composition if we apply these in pairs?). I'm not quite sure how to get from that to this:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
I've got a feeling I may have seen an explanation of this in all the stuff that I read, without understanding its significance the first time through, so I will do some re-reading to try to (re)find an explanation of this.
The other thing I would like to do is link together all the different category theory explanations: endofunctor+2 natural transformations, Kleisli category, a monoid whose objects are monoids and so on. For me the thing that seems to link all these explanations is that they are two level. That is, normally we treat category objects as black-boxes where we imply their properties from their outside interactions, but here there seems to be a need to go one level inside the objects to see what’s going on? We can explain monads without this but only if we accept apparently arbitrary constructions.
Martin
See this question: is chaining operations the only thing that the monad class solves?
In it, I explain my view that we must differentiate between the Monad class and individual types that solve individual problems. The Monad class, by itself, only solves the important problem of "chaining operations with choice" and makes this solution available to types that are instances of it (by means of "inheritance").
On the other hand, if a given type that solves a given problem also faces the problem of "chaining operations with choice", then it should be made an instance of (inherit from) the Monad class.
The fact is that problems do not get solved merely by being a Monad. It would be like saying that "wheels" solve many problems, when actually "wheels" only solve one problem, and things with wheels solve many different problems.
