Two well-known examples of applicative functors are monads and ZipLists. Are there any other examples?
From Time flies like an applicative functor by Conor McBride:
Structure cops will note that De is another example of an applicative functor which is not a monad — join would bring things from the far future to the near future, and that had better not be possible. However, where applicative functors in general only pull through traversable functors (containers with finitely many elements), De pulls through all containers. So it’s a bit special. I wonder what it is.
and
The De functor represents a fixed delay, rather than an arbitrary one. I’m dividing time into discrete slices. De x is the type of an x due at the next slice. De (De x) is thus the type of an x due in two slices’ time, and you can’t make it turn up any sooner!
Read the whole post. To answer the immediate question, the author’s conclusion is
Don’t Look!
OK, here’s the implementation. It’s a con.
newtype De x = De x deriving Show -- ssh, don't tell!
instance Functor De where
fmap f (De x) = De (f x)
instance Applicative De where
pure = De
De f <*> De s = De (f s)
fix :: (De x -> x) -> x
fix f = f (De (fix f))
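As a minimal usage sketch (my example, not from the post): the De wrapper marks the recursive occurrence as happening one step later, which keeps the recursion productive:
ones :: [Int]
ones = fix (\(De xs) -> 1 : xs)  -- fix f = f (De (fix f)), so this unfolds to 1 : 1 : ...
-- take 3 ones == [1,1,1]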
I recently defined an Applicative instance for a newtype on top of (,,,), a "quad". (The standard library defines an instance for (,), but not (,,,). That's OK, as the standard implementation has different semantics than what I was after.)
The background: I am parsing some old data, and the date format in the data is ambiguous. Each date in the data can be parsed into four possibilities, stored in the quad. I then want to validate each date in the quad to eliminate semantically invalid dates. (There are no months with 32 days, there is no month 34, there is no 5th quarter, etc.) Finally, I want to take each date in the dataset and reduce the entire set to a quad representing which date formats are valid for the entire set. Then I choose the best format out of those options and assume that's what the date format of the dataset is.
This entire operation is very easy to express as applicative operations on the quad structure.
Here's the basic shape of the code:
My new type:
newtype DQ a = DQ (a, a, a, a) -- date quad
deriving ...
instance Functor DQ where
g `fmap` f = pure g <*> f
instance Applicative DQ where
pure x = DQ (x, x, x, x)
DQ (g, h, i, j) <*> DQ (a, b, c, d) = DQ (g a, h b, i c, j d)
Some prerequisite "pure" functions:
parseDateInt :: Int -> DQ Date
validateDate :: Date -> Bool
extractBestDate :: DQ Date -> DQ Bool -> Date
So once we have the quad of parsed dates (from parseDateInt), we need to validate them:
validateDates :: DQ Date -> DQ Bool
validateDates = (validateDate <$>)
(This uses only the Functor instance so far, but you could also write it as
(pure validateDate <*>).
It's also worth noting the symmetry between validating a single
element and validating each element of the set: to validate one, you write
validateDate $ date; to validate a set, you write
validateDate <$> dates. This is why fmap is written as <$>: it's function application lifted to a functor.)
The step after this is to take a set of valid parses and fold that
into a final result:
intuitDateType :: [DQ Bool] -> DQ Bool
intuitDateType dates = foldl1 (liftA2 (&&)) dates
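For example (a hypothetical run, assuming a derived Eq/Show instance and import Control.Applicative (liftA2)), folding two validity quads keeps only the formats valid for every date:
intuitDateType [DQ (True, True, False, True), DQ (True, False, False, True)]
--  == DQ (True, False, False, True)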
So now you can go from the [Int] in the data file to a DQ Bool
representing the possibly-valid date representations for a dataset.
(And from there, associate each data point with a real date object,
instead of the flaky Int that was supplied.)
So anyway, this post has gotten a bit long, but the idea is that an
Applicative instance allowed me to solve my problem in about 3 lines
of code. My problem domain was repeatedly applying functions to data in a
container, which is what an Applicative functor does. There is no join operation on this data, so a Monad instance would not make much sense.
Conal Elliott writes about signal processors and how they're applicatives. Those are similar to ZipList in nature, where each respective pair of items in the two "containers" gets combined.
I've been using this concept a lot in an unfinished yet cute game I made (cabal install DefendTheKing to check it out).
Code snippet/example of Applicative-style usage:
draw font
<$> lstP gABoard
<*> lstP gASelection
<*> mouseMotion
<*> lstP gASide
<*> lstP gAGameIteration
Formlets are an abstraction over HTML forms, described in terms of composing applicatives. A formlet applicative is the result of composing a name-generating applicative (to generate form element names), an XML writer applicative (to generate HTML), and an environment applicative (to supply the submitted form values).
Formlets can be extended by composing additional applicatives, for example to implement validation.
Cooper, Wadler, et al. show in their papers that formlets cannot be represented as monads.
Formlets have been implemented in Haskell, here is the package.
Swierstra and Duponcheel defined an efficient style of parser; it was pretty much the early poster child for Arrows, but it doesn't need anything from Arrow that it can't get from Applicative. However, the term 'Applicative' hadn't been coined at the time.
Effectively it computes 'FIRST' sets for an LL(1) parser and uses that to do smarter branch selection. However, you can't compute these sets when you work monadically.
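A minimal sketch of the idea (my own names, not the paper's): pair static information with the dynamic parser. (<*>) can combine the static parts without running anything, whereas with (>>=) the second parser isn't even known until runtime:
data P s a = P
  { nullable :: Bool                  -- static: can it match the empty input?
  , firstSet :: [s]                   -- static: tokens that can begin a match
  , run      :: [s] -> Maybe (a, [s]) -- the dynamic parser
  }

instance Functor (P s) where
  fmap f (P n fs r) = P n fs (\ts -> fmap (\(a, rest) -> (f a, rest)) (r ts))

instance Applicative (P s) where
  pure x = P True [] (\ts -> Just (x, ts))
  P nf ff rf <*> P nx fx rx =
    P (nf && nx)                     -- empty only if both sides can be empty
      (ff ++ if nf then fx else [])  -- FIRST set, computed statically
      (\ts -> do (f, ts')  <- rf ts
                 (x, ts'') <- rx ts'
                 Just (f x, ts''))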
This is perhaps not a terribly fair example, because the Swierstra/Duponcheel parser admits a blending of static and dynamic parsers, and it is only the static parser that is limited to being applicative.
With observable sharing you can carry their parser design farther and also compute 'FOLLOW' sets (as long as you are careful not to build an infinite context-free grammar). This yields nice asymptotic guarantees for parsing context-free grammars, which are not available to you when parsing with monadic (context-sensitive) parser designs.
Also interesting, perhaps, is to consider the structures for which the <*> of Applicative is available but not pure. Many comonads admit a (<*>)-like definition that respects the structure of the comonad but have no reasonable definition for pure. My semigroupoids package and the myriad packages that depend upon it explore this idea further.
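The core of the idea is a class offering (<*>) without pure; a sketch (the real Apply class in semigroupoids has more methods). For example, a Map k can combine matching keys but has no pure, which would need a value at every possible key:
class Functor f => Apply f where
  (<.>) :: f (a -> b) -> f a -> f b

-- e.g., with Data.Map from containers (semigroupoids has such an instance):
-- instance Ord k => Apply (Map k) where
--   fs <.> xs = Map.intersectionWith ($) fs xs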
I believe arrows are applicative functors. There is certainly a WrappedArrow type in Control.Applicative.
McBride and Paterson (http://www.soi.city.ac.uk/~ross/papers/Applicative.pdf) show that a monoid can be treated as an applicative functor, but in the general case it is not a monad.
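Their example is essentially the standard Const functor: an Applicative whenever the wrapped type is a monoid, with no sensible join. A sketch (Accum is my name for it):
newtype Accum m a = Accum m   -- like Data.Functor.Const; a is phantom

instance Functor (Accum m) where
  fmap _ (Accum m) = Accum m

instance Monoid m => Applicative (Accum m) where
  pure _ = Accum mempty
  Accum x <*> Accum y = Accum (x `mappend` y)  -- just accumulate the monoid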
Related
So I understand that a free object is defined as being the left-hand side of an adjunction. But how does that lead you to the Haskell definition of such objects?
More concretely: given a "forgetful functor" from the category of monads to the category of endofunctors,
newtype Forget m a = Forget (m a)
instance Monad m => Functor (Forget m) where
fmap f (Forget x) = Forget (liftM f x)
then the free monad Free :: (* -> *) -> (* -> *) is a type admitting (a Monad instance and) the following isomorphism:
type f ~> g = forall x. f x -> g x
fwd :: (Functor f, Monad m) => (f ~> Forget m) -> (Free f ~> m)
bwd :: (Functor f, Monad m) => (Free f ~> m) -> (f ~> Forget m)
fwd . bwd = id = bwd . fwd
If we drop the Forgets, for the free monad in Control.Monad.Free we have fwd = foldFree and bwd = (. liftF) (I think?)
But how do these laws lead to the construction found in Control.Monad.Free? How do you come up with data Free f a = Return a | Free (f (Free f a))? Surely you don't just guess until you come up with something that satisfies the laws? Same question goes for the free category of a graph, the free monoid of a set, and any other free object you care to name.
I don't think the notion of "free" is as well-defined as you seem to believe. While I do think the general consensus is that it is indeed a left adjoint of a forgetful functor, the issue lies in what "forgetful" means. There are clear definitions in some broad-ranging cases, particularly for concrete categories.
Universal algebra provides a broad-ranging approach which covers almost all "algebraic" structures (over sets). The upshot is: given a "signature", which consists of sorts, operations, and equations, you build a term algebra (i.e. an AST) from the operations and then quotient it by the equivalence relation generated by the equations. This is the free algebra generated from that signature. For example, we usually talk about monoids as being a set equipped with an associative multiplication and unit. In code, the free algebra before quotienting would be:
data PreFreeMonoid a
= Unit
| Var a
| Mul (PreFreeMonoid a) (PreFreeMonoid a)
We would then quotient by the equivalence relation generated from the equations:
Mul Unit x = x
Mul x Unit = x
Mul (Mul x y) z = Mul x (Mul y z)
But you can show that the resulting quotient type is isomorphic to lists. In the multi-sorted case, we'd have a family of term algebras, one for each sort.
One way to recast this categorically is to use the notion of a (slightly generalized) Lawvere theory. Given a signature with a set of sorts, S, we can build a small category, call it T, whose objects are lists of elements of S. Such small categories will be called theories in general. Operations get mapped to arrows whose source and target correspond to the appropriate arities. We freely add "tupling" and "projection" arrows so that e.g. [A,B,A] becomes the product [A]×[B]×[A]. Finally, we add commutative diagrams (i.e. equations between arrows) corresponding to each equation in the signature. At this point, T essentially represents the term algebra(s). In fact, an actual interpretation or model of this term algebra is just a finite-product-preserving functor T → Set; write Mod(T) for the category of finite-product-preserving functors T → Set. In the single-sorted case, we'd have an underlying set functor, but in general we get an S-indexed family of sets, i.e. a functor U : Mod(T) → Set^S, where we're viewing S as a discrete category here. U is simply U(m)(s) = m([s]).
We can actually calculate the left adjoint. First, we have a family of sets indexed by elements of S, call it G. Then we need to build a finite-product-preserving functor T → Set, but any functor into Set (i.e. any copresheaf) is a colimit of representables, which, in this case, means it's a quotient of the following (dependent) sum type:
Free(G)(s) = Σt:T.T(t,s)×Free(G)(t)
If Free(G) is finite product preserving then in the t = [A,B] case, for example, we'd have:
T([A,B],s)×Free(G)([A,B]) = T([A,B],s)×Free(G)([A])×Free(G)([B])
and we simply define Free(G)([A]) = G(A) for each A in S producing:
T([A,B],s)×Free(G)([A])×Free(G)([B]) = T([A,B],s)×G(A)×G(B)
Altogether this says that an element of Free(G)([A]) consists of an arrow of T into [A] and a list of elements of the appropriate sets corresponding to the source of that arrow, i.e. the arity of the term, modulo equations that make it behave sensibly and obey the equations from the signature, but which I'm not going to elaborate on. For the multiplication of a monoid, we'd have an arrow m : [A,A] → [A], and this would lead to tuples (m, x, y) where x and y are elements of G(A), corresponding to a term like m(x, y). Recasting this definition as a recursive one takes looking at the equations we're quotienting by.
There are other things to verify to show that Free ⊣ U, but it isn't too hard. Once that's done, U∘Free is a monad on Set^S.
The nice thing about the Lawvere theory approach is that it is easy to generalize in multiple ways. One straightforward way is to replace Set by some other topos E. It's actually the case that the category of directed multigraphs forms a topos, but I don't believe you can (easily) view categories as theories over Graph. A different direction in which to extend Lawvere theories is to consider doctrines other than finite-product-preserving functors; in particular, finite-limit-preserving functors, aka left exact or lex functors, are an interesting point. Both small categories and directed multigraphs (which categorists sometimes call quivers) can be viewed as models of a category with finite limits. There's a straightforward inclusion of the theory of directed multigraphs into the theory of small categories. This, contravariantly, induces a functor Cat → Graph simply by precomposition. The left adjoint of this is then (almost) the left Kan extension along that inclusion. These left Kan extensions will occur in Set, so ultimately they are just colimits, which are just quotients of (dependent) sum types. (Technically, you need to verify that the resulting Kan extensions are finite-limit preserving. We're also helped by the fact that the models of the theory of graphs are essentially arbitrary functors from the theory of graphs. This happens because the theory of graphs consists only of unary operations.)
None of this helps for free monads, though. However, it turns out that one construction subsumes all of these, including free monads. Returning to universal algebra, it's the case that every signature with no equations gives rise to a (polynomial) functor whose initial algebra is the free term algebra. Lambek's lemma suggests, and it's easy to prove, that the initial algebra is just the colimit of repeated applications of the functor. The above general result is based on a similar approach, and the relevant case for free monads is the unpointed endofunctor case, in which you start to see the definition of Free that you gave, but actually working it out fully requires unfolding many constructions.
Frankly, though, what I'm pretty sure actually happened in the FP world is the following. If you look at PreFreeMonoid, it's actually a free monad. PreFreeMonoid Void is the initial algebra for the functor that the monoid signature (minus the equations) would give rise to. If you are familiar with using functors for initial algebras and you even start thinking about universal algebra, you are almost certainly going to end up defining a type like data Term f a = Var a | Op (f (Term f a)). It's easy to verify this is a monad once you think to ask the question. If you're even vaguely familiar with the relationship monads have to algebraic structures or to term substitution, then you may ask the question quite quickly. The same construction can be stumbled upon from a programming-language implementation perspective. If you just directly set your goal to be deriving the free monad construction in Haskell, there are several intuitive ways to arrive at the right definition, especially combined with some equational/parametricity-driven reasoning. In fact, the "monoid object in the category of endofunctors" one is quite suggestive.
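For instance, here is the Monad instance one would plausibly write down for that Term type (a sketch): binding is exactly substitution for variables:
data Term f a = Var a | Op (f (Term f a))

instance Functor f => Functor (Term f) where
  fmap g (Var a) = Var (g a)
  fmap g (Op t)  = Op (fmap (fmap g) t)

instance Functor f => Applicative (Term f) where
  pure = Var
  tf <*> tx = tf >>= \f -> fmap f tx

instance Functor f => Monad (Term f) where
  Var a >>= k = k a                 -- substitute at a variable
  Op t  >>= k = Op (fmap (>>= k) t) -- push the substitution under the operations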
(I really wish this StackExchange had MathJax support.)
I think I have rough idea of what the free monad is, but I would like to have a better way to visualize it.
It makes sense that free magmas are just binary trees because that's as "general" as you can be without losing any information.
Similarly, it makes sense that free monoids are just lists, because the grouping of operations doesn't matter. There is now a redundancy in the "binary tree", so you can just flatten it, if that makes sense.
It makes sense that free groups kind of look like fractals, for a similar reason: https://en.wikipedia.org/wiki/Cayley_graph#/media/File:Cayley_graph_of_F2.svg
and to get other groups, we just identify different elements of the group as being the "same" and we get other groups.
How should I be visualizing the free monad? Right now, I just think of it as the most general abstract syntax tree that you can imagine. Is that essentially it? Or am I oversimplifying it?
Also, similarly, what do we lose in going from a free monad to a list or other monads? What are we identifying to be the "same"?
I appreciate all comments that shed light into this. Thanks!
Right now, I just think of [the free monad] as the most general abstract syntax tree that you can imagine. Is that essentially it? Or am I oversimplifying it?
You're oversimplifying it:
"Free monad" is short for "the free monad over a specific functor" or the Free f a data type, which in reality is a different free monad for each choice of f.
There is no one general structure that all free monads have. Their structure breaks down into:
What is contributed by Free itself
What is contributed by different choices for f
But let's take a different approach. I learned free monads by first studying the closely related operational monad instead, which has a more uniform, easier-to-visualize structure. I highly recommend you study that from the link itself.
The simplest way to define the operational monad is like this:
{-# LANGUAGE GADTs #-}
data Program instr a where
Return :: a -> Program instr a
Bind :: instr x -- an "instruction" with result type `x`
-> (x -> Program instr a) -- function that computes rest of program
-> Program instr a -- a program with result type `a`
...where the instr type parameter represents the "instruction" type of the monad, usually a GADT. For example (taken from the link):
data StackInstruction a where
Pop :: StackInstruction Int
Push :: Int -> StackInstruction ()
So a Program in the operational monad, informally, I'd visualize it as a "dynamic list" of instructions, where the result produced by the execution of any instruction is used as input to the function that decides what the "tail" of the "instruction list" is. The Bind constructor pairs an instruction with a "tail chooser" function.
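A sketch of the Monad instance for this Program type (the operational package's real implementation differs in its details): binding just extends the "tail chooser" function:
instance Functor (Program instr) where
  fmap f (Return a) = Return (f a)
  fmap f (Bind i k) = Bind i (fmap f . k)

instance Applicative (Program instr) where
  pure = Return
  pf <*> px = pf >>= \f -> fmap f px

instance Monad (Program instr) where
  Return a >>= k  = k a
  Bind i k >>= k' = Bind i (\x -> k x >>= k')  -- extend the tail chooser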
Many free monads can also be visualized in similar terms—you can say that the functor chosen for a given free monad serves as its "instruction set." But with free monads, the "tails" or "children" of the "instruction" are managed by the Functor itself. So a simple example (taken from Gabriel González's popular blog entry on the topic):
data Free f r = Free (f (Free f r)) | Pure r
-- The `next` parameter represents the "tails" of the computation.
data Toy b next =
Output b next
| Bell next
| Done
instance Functor (Toy b) where
fmap f (Output b next) = Output b (f next)
fmap f (Bell next) = Bell (f next)
fmap _ Done = Done
While in the operational monad the function used to generate the "tail" belongs to the Program type (in the Bind constructor), in free monads the tails belong to the "instruction"/Functor type. This allows the free monad's "instructions" (an analogy that is now breaking down) to have a single "tail" (like Output or Bell), zero tails (like Done), or multiple tails if you so choose. Or, in another common pattern, the next parameter can be the result type of an embedded function:
data Terminal a next =
PutStrLn String next
| GetLine (String -> next) -- can't access the next "instruction" unless
-- you supply a `String`.
instance Functor Terminal where
fmap f (PutStrLn str next) = PutStrLn str (f next)
fmap f (GetLine g) = GetLine (fmap f g)
This, incidentally, is an objection I've long had to people who refer to free or operational monads as "syntax trees"—practical use of them requires that "children" of a node be opaquely hidden inside a function. You generally can't fully inspect this "tree"!
So really, when you get down to it, how to visualize a free monad comes down entirely to the structure of the Functor that you use to parametrize it. Some look like lists, some look like trees, and some look like "opaque trees" with functions as nodes. (Somebody once responded to my objection above with a line like "a function is a tree node with as many children as there are possible arguments.")
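For concreteness, a minimal interpreter sketch (my code, not from the blog entry) that runs the Terminal functor's free monad in IO; note that GetLine's "child" only becomes available once you supply a String:
interpret :: Free Terminal a -> IO a
interpret (Pure a)                 = pure a
interpret (Free (PutStrLn s next)) = putStrLn s >> interpret next
interpret (Free (GetLine g))       = getLine >>= interpret . g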
You may have heard
Monad is a monoid in a category of endofunctors
And you mentioned already that monoids are just lists. So there you are.
Expanding on that a bit:
data Free f a = Pure a
| Free (f (Free f a))
It's not a normal list of a, but a list where the tail is wrapped inside f. You'll see it if you write out the structure of a value built from multiple nested binds:
pure x >>= f >>= g >>= h :: Free m a
might result in
Free $ m1 $ Free $ m2 $ Free $ m3 $ Pure x
where m1, m2, m3 :: a -> m a -- Some underlying functor "constructors"
If the m in the example above is a sum type:
data Sum a = Inl a | Inr a
deriving Functor
Then the list is actually a tree, as at each constructor we can branch left or right.
You may have heard that
Applicative is a monoid in a category of endofunctors
... the category is just different. There are nice visualisations of different free applicative encodings in Roman Cheplyaka's blog post.
So the free Applicative is also a list. I imagine it as a heterogeneous list of f a values together with a single function:
x :: FreeA f a
x = FreeA g [ s, t, u, v]
where g :: b -> c -> d -> e -> a
s :: f b
t :: f c
u :: f d
v :: f e
In this case the tail itself isn't wrapped in f; rather, each element is wrapped separately. This might or might not help in understanding the difference between Applicative and Monad.
Note that f doesn't need to be a Functor to make FreeA f an Applicative, contrary to the Free monad above.
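One standard encoding, sketched after Control.Applicative.Free's Ap from the free package (GADTs required); note that no Functor f constraint appears anywhere:
{-# LANGUAGE GADTs #-}
data Ap f a where
  Pure :: a -> Ap f a
  Ap   :: f b -> Ap f (b -> a) -> Ap f a

instance Functor (Ap f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Ap fb k) = Ap fb (fmap (g .) k)

instance Applicative (Ap f) where
  pure = Pure
  Pure g  <*> y = fmap g y
  Ap fb k <*> y = Ap fb (flip <$> k <*> y)  -- reassociate, keeping fb first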
There is also the free Functor:
data Coyoneda f a where
  Coyoneda :: (b -> a) -> f b -> Coyoneda f a
which makes any * -> * type Functor. Compare it with free Applicative above.
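The Functor instance needs nothing from f; it just composes functions (a sketch, with liftCoyoneda named as in the kan-extensions package):
instance Functor (Coyoneda f) where
  fmap g (Coyoneda h fb) = Coyoneda (g . h) fb  -- compose; fb is untouched

liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id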
In the applicative case we had a heterogeneous list of length n of f a values and an n-ary function combining them.
Coyoneda is the 1-ary special case of the above.
We can combine Coyoneda and Free to make the operational free monad. And as the other answer mentions, that one is hardly imaginable as a tree, as there are functions involved. OTOH you can imagine those continuations as different, magical arrows in your picture :)
How to combine the state monad S -> (A, S) with the costate comonad (E->A, E)?
I tried with both obvious combinations S -> ((E->A, E), S) and (E->S->(A, S), E) but then in either case I do not know how to define the operations (return, extract, ... and so on) for the combination.
Combining two monads O and I yields a monad if either O or I is copointed, i.e. has an extract method. Every comonad is copointed. If both O and I are copointed, then you have two different "natural" ways to obtain a monad, which are presumably not equivalent.
You have:
unit_O :: a -> O a
join_O :: O (O a) -> O a
unit_I :: a -> I a
join_I :: I (I a) -> I a
Here I've added _O and _I suffixes for clarity; in actual Haskell code they would not be there, since the type checker figures this out on its own.
Your goal is to show that O (I a) is a monad. Let's assume that O is copointed, i.e. that there is a function extract_O :: O a -> a.
Then we have:
unit :: a -> O (I a)
unit = unit_O . unit_I
join :: O (I (O (I a))) -> O (I a)
The problem, of course, is in implementing join. We follow this strategy:
fmap over the outer O
use extract_O to get rid of the inner O
use join_I to combine the two I monads
This leads us to
join = fmap_O $ join_I . fmap_I extract_O
To make this work, you'll also need to define
newtype MCompose o i a = MCompose (o (i a))
and add the respective type constructors and deconstructors into the definitions above.
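Spelled out as a sketch (extractC and Copointed are my names; the pointed package offers a similar class):
import Control.Monad (join)

class Functor f => Copointed f where
  extractC :: f a -> a

newtype MCompose o i a = MCompose { getMCompose :: o (i a) }

unitC :: (Monad o, Monad i) => a -> MCompose o i a
unitC = MCompose . return . return

-- fmap over the outer o, strip the inner o with extractC, then join the two i layers:
joinC :: (Monad o, Monad i, Copointed o)
      => MCompose o i (MCompose o i a) -> MCompose o i a
joinC = MCompose . fmap (join . fmap (extractC . getMCompose)) . getMCompose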
The other alternative uses extract_I instead of extract_O. This version is even simpler:
join = join_O . fmap_O extract_I
This defines a new monad. I assume you can define a new comonad in the same way, but I haven't attempted this.
As the other answer demonstrates, both of the combinations S -> ((E->A, E), S) and (E->S->(A, S), E)
have Monad and Comonad instances simultaneously. In fact, giving a Monad/Comonad instance is equivalent to giving a monoid structure to, respectively, its points ∀r. r -> f(r) or its copoints ∀r. f(r) -> r, at least in a classical, non-constructive sense (I don't know the constructive answer). This fact suggests that a Functor f actually has a very good chance of being both a Monad and a Comonad, provided its points and copoints are non-trivial.
The real question, however, is whether the Monad/Comonad instances constructed as such do have natural
computational/categorical meanings. In this particular case I would say "no", because you don't seem to have a priori knowledge about how to compose them in a way that suits your computational needs.
The standard categorical way to compose two (co)monads is via adjunctions. Let me summarize your situation:
        Fₑ            Fₛ
      ------->      ------->
Hask     ⊣     Hask    ⊣     Hask
      <-------      <-------
        Gₑ            Gₛ
Fₜ(a) = (a,t)
Gₜ(a) = (t->a)
Proof of Fₜ ⊣ Gₜ:
Fₜ(x) -> y ≃
(x,t) -> y ≃
x -> (t->y) ≃
x -> Gₜ(y)
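In Haskell, this hom-set isomorphism for Fₜ ⊣ Gₜ is just currying (a sketch with my names):
-- Fₜ(x) = (x, t), Gₜ(y) = t -> y
fwdAdj :: ((x, t) -> y) -> (x -> (t -> y))
fwdAdj f = \x t -> f (x, t)    -- essentially curry

bwdAdj :: (x -> (t -> y)) -> ((x, t) -> y)
bwdAdj g = \(x, t) -> g x t    -- essentially uncurry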
Now you can see that the state monad (s->(a,s)) ≃ (s->a,s->s) is the composition GₛFₛ and the costate comonad
is FₑGₑ. This adjunction says that Hask can be interpreted as a model of the (co)state (co)algebras.
Now, 'adjunctions compose.' For example,
FₛFₑ(x) -> y ≃
Fₑ(x) -> Gₛ(y) ≃
x -> GₑGₛ(y)
So FₛFₑ ⊣ GₑGₛ. This gives a pair of a monad and a comonad, namely
T(a) = GₑGₛFₛFₑ(a)
= GₑGₛFₛ(a,e)
= GₑGₛ(a,e,s)
= Gₑ(s->(a,e,s))
= e->s->(a,e,s)
= ((e,s)->a, (e,s)->(e,s))
G(a) = FₛFₑGₑGₛ(a)
= FₛFₑGₑ(s->a)
= FₛFₑ(e->s->a)
= Fₛ(e->s->a,e)
= (e->s->a,e,s)
= ((e,s)->a, (e,s))
T is simply the state monad with the state (e,s), G is the costate comonad with the costate (e,s), so
these do have very natural meanings.
Composing adjunctions is a natural, frequent mathematical operation. For example, a geometric morphism
between topoi (a kind of Cartesian closed category which admits complex (non-free) constructions at the 'type level') is defined as a pair of adjunctions, only requiring its left adjoint to be left exact (i.e. to preserve finite limits). If those topoi are sheaves on topological spaces,
composing the adjunctions simply corresponds to composing the (unique) continuous base-change maps (in the opposite direction), which has a very natural meaning.
On the other hand, composing monads/comonads directly seems to be a very rare practice in Mathematics.
This is because a (co)monad is often thought of as a carrier of a (co)algebraic theory, rather than as a
model. In this interpretation the corresponding adjunctions are the models, not the monad. The problem
is that composing two theories requires another theory, a theory about how to compose them. For example,
imagine composing two theories of monoids. Then you may get at least two new theories,
namely the theory of lists of lists, or ring-like algebras where two kinds of binary operations distribute.
Neither is a priori better/more natural than the other. This is the meaning of "monads don't compose"; it doesn't say the composition cannot be a monad, but it does say you will need another theory describing how to compose them.
In contrast, composing adjunctions naturally results in another adjunction simply because, by doing so, you are
implicitly specifying the rules for composing the two given theories. So by taking the monad of the composed adjunction, you get a theory that also specifies the rules of composition.
I'm trying to figure out what exactly are the rules for deriving Functor in Haskell.
I've seen message postings about it, and I've seen test code about it, but I can't seem to find official documentation about what the rules are. Can someone please clarify and/or point me to the right place?
To use deriving Functor you must enable the DeriveFunctor language pragma and apply it to a polymorphic type which has a covariant final type variable (in other words, a type which admits a valid Functor instance). It'll then derive the "obvious" Functor instance.
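For example, a minimal use looks like this:
{-# LANGUAGE DeriveFunctor #-}

data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Functor)

-- fmap (+1) (Node Leaf 1 Leaf) == Node Leaf 2 Leaf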
There's been some concern in the past that the derived instance is not as efficient as a hand coded one is, though I cannot seem to find that material.
The algorithm itself was, as far as I could find, first proposed by Twan van Laarhoven in 2007 and makes heavy use of Generic Haskell programming.
The code that actually does the deed is, unfortunately, a bit on the hairy side. I believe that's largely because earlier, simpler code would sometimes lead to excessive compilation times. Twan van Laarhoven came up with the current code to address this.
The derived Functor instance always does the obvious thing. This is usually just fine, but occasionally misses opportunities. For example, suppose I write
data Pair a = Pair a a deriving Functor
data Digit a = One a | Two a a deriving Functor
data Queue a =
Empty
| Single a
| Deep !(Digit a) (Queue (Pair a)) !(Digit a) deriving Functor
This will generate (in GHC 8.2)
instance Functor Queue where
fmap ...
x <$ Empty = Empty
x <$ Single y = Single x
x <$ Deep pr m sf = Deep (x <$ pr) (fmap (x <$) m) (x <$ sf)
It's possible to write that last case much better by hand:
x <$ Deep pr m sf = Deep (x <$ pr) (Pair x x <$ m) (x <$ sf)
You can see the actual derived code using -ddump-deriv.
I've been reading about monads in category theory. One definition of monads uses a pair of adjoint functors. A monad is defined by a round-trip using those functors. Apparently adjunctions are very important in category theory, but I haven't seen any explanation of Haskell monads in terms of adjoint functors. Has anyone given it a thought?
Edit: Just for fun, I'm going to do this right. Original answer preserved below
The current adjunction code for category-extras now is in the adjunctions package: http://hackage.haskell.org/package/adjunctions
I'm just going to work through the state monad explicitly and simply. This code uses Data.Functor.Compose from the transformers package, but is otherwise self-contained.
An adjunction between f (D -> C) and g (C -> D), written f -| g, can be characterized in a number of ways. We'll use the counit/unit (epsilon/eta) description, which gives two natural transformations (morphisms between functors).
class (Functor f, Functor g) => Adjoint f g where
counit :: f (g a) -> a
unit :: a -> g (f a)
Note that the "a" in counit is really the identity functor in C, and the "a" in unit is really the identity functor in D.
We can also recover the hom-set adjunction definition from the counit/unit definition.
phiLeft :: Adjoint f g => (f a -> b) -> (a -> g b)
phiLeft f = fmap f . unit
phiRight :: Adjoint f g => (a -> g b) -> (f a -> b)
phiRight f = counit . fmap f
In any case, we can now define a Monad from our unit/counit adjunction like so:
instance Adjoint f g => Monad (Compose g f) where
return x = Compose $ unit x
x >>= f = Compose . fmap counit . getCompose $ fmap (getCompose . f) x
Now we can implement the classic adjunction between (a,) and (a ->):
instance Adjoint ((,) a) ((->) a) where
-- counit :: (a,a -> b) -> b
counit (x, f) = f x
-- unit :: b -> (a -> (a,b))
unit x = \y -> (y, x)
And now a type synonym
type State s = Compose ((->) s) ((,) s)
And if we load this up in ghci, we can confirm that State is precisely our classic state monad. Note that we can take the opposite composition and get the Costate Comonad (aka the store comonad).
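As a quick sanity check (my definitions; note the pair component order is (s, a) with this composition):
get :: State s s
get = Compose $ \s -> (s, s)

put :: s -> State s ()
put s' = Compose $ \_ -> (s', ())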
There are a bunch of other adjunctions we can make into monads in this fashion (such as the one between (Bool,) and Pair), but they're sort of strange monads. Unfortunately we can't do the adjunctions that induce Reader and Writer directly in Haskell in a pleasant way. We can do Cont, but as copumpkin describes, that requires an adjunction from an opposite category, so it actually uses a different "form" of the "Adjoint" typeclass that reverses some arrows. That form is also implemented in a different module in the adjunctions package.
This material is covered in a different way by Derek Elkins' article in The Monad Reader 13 -- Calculating Monads with Category Theory: http://www.haskell.org/wikiupload/8/85/TMR-Issue13.pdf
Also, Hinze's recent Kan Extensions for Program Optimization paper walks through the construction of the list monad from the adjunction between Mon and Set: http://www.cs.ox.ac.uk/ralf.hinze/Kan.pdf
Old answer:
Two references.
1) Category-extras delivers, as always, with a representation of adjunctions and how monads arise from them. As usual, it's good to think with, but pretty light on documentation: http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Functor-Adjunction.html
2) Haskell-Cafe also delivers with a promising but brief discussion on the role of adjunctions. Some of it may help in interpreting category-extras: http://www.haskell.org/pipermail/haskell-cafe/2007-December/036328.html
Derek Elkins was showing me recently over dinner how the Cont Monad arises from composing the (_ -> k) contravariant functor with itself, since it happens to be self-adjoint. That's how you get (a -> k) -> k out of it. Its counit, however, leads to double negation elimination, which can't be written in Haskell.
For some Agda code that illustrates and proves this, please see http://hpaste.org/68257.
This is an old thread, but I found the question interesting,
so I did some calculations myself. Hopefully Bartosz is still there
and might read this...
In fact, the Eilenberg-Moore construction does give a very clear picture in this case.
(I will use CWM notation with Haskell-like syntax.)
Let T be the list monad < T,eta,mu > (eta = return and mu = concat)
and consider a T-algebra h:T a -> a.
(Note that T a = [a] is a free monoid <[a],[],(++)>, that is, identity [] and multiplication (++).)
By definition, h must satisfy h . T h == h . mu a and h . eta a == id.
Now, some easy diagram chasing proves that h actually induces a monoid structure on a (defined by x*y = h [x,y]),
and that h becomes a monoid homomorphism for this structure.
Conversely, any monoid structure < a,a0,* > defined in Haskell is naturally defined as a T-algebra,
namely h = foldr (*) a0, a function that 'replaces' (:) with (*) and maps [] to a0, the identity.
So, in this case, the category of T-algebras is just the category of monoid structures definable in Haskell, HaskMon.
(Please check that the morphisms in T-algebras are actually monoid homomorphisms.)
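In Haskell terms the correspondence is the familiar one (a sketch):
-- from a monoid to a T-algebra:
alg :: Monoid a => [a] -> a
alg = foldr mappend mempty      -- h = foldr (*) a0

-- and from a T-algebra h (satisfying the laws) back to the monoid:
--   x * y = h [x, y]
--   a0    = h []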
It also characterizes lists as universal objects in HaskMon, just like free products in Grp, polynomial rings in CRng, etc.
The adjunction corresponding to the above construction is < F,G,eta,epsilon >
where
F : Hask -> HaskMon, which takes a type a to the 'free monoid generated by a', that is, [a],
G : HaskMon -> Hask, the forgetful functor (forget the multiplication),
eta : 1 -> GF, the natural transformation defined by \x -> [x] (at each type a),
epsilon : FG -> 1, the natural transformation defined by the folding function above
(the 'canonical surjection' from a free monoid to its quotient monoid).
Next, there is another 'Kleisli category' and the corresponding adjunction.
You can check that it is just the category of Haskell types with morphisms a -> T b,
where composition is given by the so-called 'Kleisli composition' (>=>).
A typical Haskell programmer will find this category more familiar.
Finally, as is illustrated in CWM, the category of T-algebras
(resp. the Kleisli category) becomes the terminal (resp. initial) object in the category
of adjunctions that define the list monad T, in a suitable sense.
I suggest doing a similar calculation for the binary tree functor T a = L a | B (T a) (T a) to check your understanding.
I've found a standard construction of adjoint functors for any monad by Eilenberg-Moore, but I'm not sure if it adds any insight to the problem. The second category in the construction is a category of T-algebras. A T-algebra adds a "product" to the initial category.
So how would it work for a list monad? The functor in the list monad consists of a type constructor, e.g., Int -> [Int], and a mapping of functions (e.g., the standard application of map to lists). An algebra adds a mapping from lists to elements. One example would be adding (or multiplying) all the elements of a list of integers. The functor F takes any type, e.g., Int, and maps it into the algebra defined on lists of Int, where the product is defined by monadic join (or vice versa, join is defined as the product). The forgetful functor G takes an algebra and forgets the product. The pair F, G of adjoint functors is then used to construct the monad in the usual way.
I must say I'm none the wiser.
If you are interested, here are some thoughts of a non-expert
on the role of monads and adjunctions in programming languages:
First of all, there exists, for a given monad T, a unique adjunction to the Kleisli category of T.
In Haskell, the use of monads is primarily confined to operations in this category
(which is essentially a category of free algebras, no quotients).
In fact, all one can do with a Haskell Monad is to compose some Kleisli morphisms of
type a -> T b through the use of do expressions, (>>=), etc., to create a new
morphism. In this context, the role of monads is restricted to just economy
of notation. One exploits the associativity of morphisms to be able to write (say) [0,1,2]
instead of (Cons 0 (Cons 1 (Cons 2 Nil))); that is, you can write a sequence as a sequence,
not as a tree.
Even the use of IO monads is non-essential, for the current Haskell type system is powerful
enough to realize data encapsulation (existential types).
This is my answer to your original question,
but I'm curious what Haskell experts have to say about this.
On the other hand, as we have noted, there's also a 1-1 correspondence between monads and
adjunctions to (T-)algebras. Adjoints, in MacLane's terms, are 'a way
to express equivalences of categories.'
In a typical setting of adjunctions <F,G> : X -> A, where F is some sort
of 'free algebra generator' and G a 'forgetful functor', the corresponding monad
will (through the use of T-algebras) describe how (and when) the algebraic structure of A is constructed on the objects of X.
In the case of Hask and the list monad T, the structure which T introduces is that
of a monoid, and this can help us establish properties (including the correctness) of code through the algebraic
methods that the theory of monoids provides. For example, the function foldr (*) e :: [a] -> a can
readily be seen to be computable under any grouping of its applications of (*) as long as <a,(*),e> is a monoid,
a fact which could be exploited by the compiler to optimize the computation (e.g. by parallelism).
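As a sketch of that optimization (my code): a monoidal fold may be split anywhere, so the halves could be computed in parallel:
foldSplit :: Monoid a => [a] -> a
foldSplit xs = foldr mappend mempty l `mappend` foldr mappend mempty r
  where (l, r) = splitAt (length xs `div` 2) xs
-- associativity and identity guarantee foldSplit xs == foldr mappend mempty xs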
Another application is to identify and classify 'recursion patterns' in functional programming using categorical
methods, in the hope of (partially) disposing of 'the goto of functional programming', Y (the arbitrary recursion combinator).
Apparently, this kind of application is one of the primary motivations of the creators of Category Theory (MacLane, Eilenberg, etc.),
namely, to establish natural equivalences of categories and transfer a well-known method in one category
to another (e.g. homological methods to topological spaces, algebraic methods to programming, etc.).
Here, adjoints and monads are indispensable tools to exploit this connection of categories.
(Incidentally, the notion of monads (and its dual, comonads) is so general that one can even go so far as to define 'cohomologies' of
Haskell types. But I have not given it a thought yet.)
As for the non-deterministic functions you mentioned, I have much less to say...
But note that if an adjunction <F,G> : Hask -> A for some category A defines the list monad T,
there must be a unique 'comparison functor' K : A -> HaskMon (the category of monoids definable in Haskell), see CWM.
This means, in effect, that your category of interest must be a category of monoids in some restricted form (e.g. it may lack some quotients but not free algebras) in order to define the list monad.
Finally, some remarks:
The binary tree functor I mentioned in my last posting easily generalizes to arbitrary data type
T a1 .. an = T1 T11 .. T1m | ....
Namely, any data type in Haskell naturally defines a monad (together with the corresponding category of algebras and the Kleisli category),
which is just a consequence of any data constructor in Haskell being total.
This is another reason why I consider Haskell's Monad class to be not much more than syntactic sugar
(which is pretty important in practice, of course).