Computation Constructs (Monads, Arrows, etc.) - haskell

I have become rather interested in how computation is modeled in Haskell. Several resources have described monads as "composable computation" and arrows as "abstract views of computation". I've never seen monoids, functors or applicative functors described in this way. It seems that they lack the necessary structure.
I find that idea interesting and wonder if there are any other constructs that do something similar. If so, what are some resources that I can use to acquaint myself with them? Are there any packages on Hackage that might come in handy?
Note: This question is similar to
Monads vs. Arrows and https://stackoverflow.com/questions/2395715/resources-for-learning-monads-functors-monoids-arrows-etc, but I am looking for constructs beyond functors, applicative functors, monads, and arrows.
Edit: I concede that applicative functors should be considered "computational constructs", but I'm really looking for something I haven't come across yet; what I have already come across includes applicative functors, monads, and arrows.

Arrows are generalized by Categories, and so by the Category typeclass.
class Category cat where
  (.) :: cat b c -> cat a b -> cat a c
  id :: cat a a
The Arrow typeclass definition has Category as a superclass. Categories (in the Haskell sense) generalize functions (you can compose them but not apply them) and so are definitely a "model of computation". Arrow provides a Category with additional structure for working with tuples. So, while Category mirrors something about Haskell's function space, Arrow extends that to something about product types.
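As a sanity check, ordinary functions themselves form a Category; the following simply mirrors the instance that ships with Control.Category (which hides the Prelude's id and (.)):

-- Functions are the prototypical Category: composition and identity.
instance Category (->) where
  id = \x -> x
  f . g = \x -> f (g x)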
Every Monad gives rise to something called a "Kleisli Category" and this construction gives you instances of ArrowApply. You can build a Monad out of any ArrowApply such that going full circle doesn't change your behavior, so in some deep sense Monad and ArrowApply are the same thing.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  Kleisli f . Kleisli g = Kleisli (\b -> g b >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first  (Kleisli f) = Kleisli (\ ~(b, d) -> f b >>= \c -> return (c, d))
  second (Kleisli f) = Kleisli (\ ~(d, b) -> f b >>= \c -> return (d, c))
Actually every Arrow gives rise to an Applicative (universally quantified to get the kinds right) in addition to the Category superclass, and I believe the combination of the appropriate Category and Applicative is enough to reconstruct your Arrow.
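Here is a sketch of that claim, under the usual construction: fix the input type of the arrow and use (&&&) to pair up results. The newtype name ArrowApp is mine; Control.Applicative's WrappedArrow is the library version of the same idea.

import Control.Arrow (Arrow, arr, (&&&), (>>>))

newtype ArrowApp arr i a = ArrowApp { runArrowApp :: arr i a }

instance Arrow arr => Functor (ArrowApp arr i) where
  fmap f (ArrowApp g) = ArrowApp (g >>> arr f)

instance Arrow arr => Applicative (ArrowApp arr i) where
  pure x = ArrowApp (arr (const x))
  ArrowApp f <*> ArrowApp x = ArrowApp ((f &&& x) >>> arr (uncurry ($)))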
So, these structures are deeply connected.
Warning: wishy-washy commentary ahead. One central difference between the Functor/Applicative/Monad way of thinking and the Category/Arrow way of thinking is that while Functor and its ilk are generalizations at the level of objects (types in Haskell), Category/Arrow are generalizations of the notion of morphism (functions in Haskell). My belief is that thinking at the level of generalized morphisms involves a higher level of abstraction than thinking at the level of generalized objects. Sometimes that is a good thing, other times it is not. On the other hand, despite the fact that Arrows have a categorical basis, and no one in math thinks Applicative is particularly interesting, it is my understanding that Applicative is generally better understood than Arrow.
Basically you can think of "Category < Arrow < ArrowApply" and "Functor < Applicative < Monad" such that "Category ~ Functor", "Arrow ~ Applicative" and "ArrowApply ~ Monad".
More Concrete Below:
As for other structures to model computation: one can often reverse the direction of the "arrows" (just meaning morphisms here) in categorical constructions to get the "dual" or "co-construction". So, if a monad is defined as
class Functor m => Monad m where
  return :: a -> m a
  join :: m (m a) -> m a
(okay, I know that isn't how Haskell defines things, but ma >>= f = join $ fmap f ma and join x = x >>= id so it just as well could be)
then the comonad is
class Functor m => Comonad m where
  extract :: m a -> a          -- this is co-return
  duplicate :: m a -> m (m a)  -- this is co-join
This thing turns out to be pretty common also. It turns out that Comonad is the basic underlying structure of cellular automata. For completeness, I should point out that Edward Kmett's Control.Comonad puts duplicate in a class between Functor and Comonad for "Extendable Functors", because you can also define
extend :: (m a -> b) -> m a -> m b  -- look familiar? This is just the dual of >>=
extend f = fmap f . duplicate

-- and this is enough to recover duplicate:
duplicate = extend id
It turns out that all Monads are also "Extendable"
monadDuplicate :: Monad m => m a -> m (m a)
monadDuplicate = return
while all Comonads are "Joinable"
comonadJoin :: Comonad m => m (m a) -> m a
comonadJoin = extract
so these structures are very close together.
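As a hedged illustration of the cellular-automata remark above (the Tape type, shiftL/shiftR and step are my own names; the real Comonad class lives in Edward Kmett's comonad package, restated here so the sketch stands alone): a two-sided infinite tape is a Comonad, and extend applies a local rule at every cell at once.

class Functor m => Comonad m where
  extract :: m a -> a
  duplicate :: m a -> m (m a)

extend :: Comonad m => (m a -> b) -> m a -> m b
extend f = fmap f . duplicate

data Tape a = Tape [a] a [a]   -- left neighbours (nearest first), focus, right neighbours

instance Functor Tape where
  fmap f (Tape ls x rs) = Tape (map f ls) (f x) (map f rs)

instance Comonad Tape where
  extract (Tape _ x _) = x
  duplicate t = Tape (tail (iterate shiftL t)) t (tail (iterate shiftR t))

-- Assumes both side lists are infinite, as is usual for a CA tape.
shiftL, shiftR :: Tape a -> Tape a
shiftL (Tape (l:ls) x rs) = Tape ls l (x:rs)
shiftR (Tape ls x (r:rs)) = Tape (x:ls) r rs

-- One synchronous step of a cellular automaton with a 3-cell neighbourhood.
step :: (a -> a -> a -> a) -> Tape a -> Tape a
step rule = extend (\t -> rule (extract (shiftL t)) (extract t) (extract (shiftR t)))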

All Monads are Arrows (Monad is isomorphic to ArrowApply). In a different way, all Monads are instances of Applicative, where <*> is Control.Monad.ap and *> is >>. Applicative is weaker because it does not provide the >>= operation; thus Applicative captures computations that cannot examine earlier results and branch on their values. In retrospect, much monadic code is actually applicative, and a clean rewrite would make that explicit.
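Here is a small illustration of that last point (the names addM and addA are mine): monadic code that never inspects an intermediate result can be rewritten applicatively.

-- Monadic style: binds, but neither result influences the structure.
addM :: Maybe Int -> Maybe Int -> Maybe Int
addM mx my = mx >>= \x -> my >>= \y -> return (x + y)

-- The same computation, applicatively; same behaviour, weaker interface.
addA :: Maybe Int -> Maybe Int -> Maybe Int
addA mx my = (+) <$> mx <*> my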
Extending monads, with recent Constraint kinds in GHC 7.4.1 there can now be nicer designs for restricted monads. And there are also people looking at parameterized monads, and of course I include a link to something by Oleg.

In libraries these structures give rise to different types of computation.
For example, Applicatives can be used to implement static effects - effects that are fixed up front, say a state machine that accepts or rejects its input. Such computations cannot reshape their internal structure based on their input.
The type says it all:
<*> :: f (a -> b) -> f a -> f b
It is easy to see why: the structure of f cannot depend on the input a, because a cannot reach f at the type level.
Monads can be used for dynamic effects. This also can be reasoned from the type signature:
>>= :: m a -> (a -> m b) -> m b
How can you see this? Because a is fed to a function that builds the new m b, so the structure can depend on it. Mathematically, bind is a two-stage process: it is the composition of two functions, fmap and join. First we use fmap together with the monadic action to create a new structure embedded in the old one:
fmap :: (a -> b) -> m a -> m b
f :: (a -> m b)
m :: m a
fmap f :: m a -> m (m b)
fmap f m :: m (m b)
fmap can create new structure based on the input value. Then we collapse the structure with join, and so we are able to manipulate the structure from within the monadic computation in a way that depends on the input:
join :: m (m a) -> m a
join (fmap f m) :: m b
Many monads are easier to implement with join:
m >>= f = join (fmap f m)
This is possible with monads:
addCounter :: Int -> m Int ()
This is not possible with applicatives; applicatives (and any monad) can only do things like:
addOne :: m Int ()
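One possible concrete reading of those signatures - an assumption on my part, not something the answer specifies - is to take m Int () to be State Int () from Control.Monad.State (mtl); then the monadic version's effect depends on its argument while the other is fixed:

import Control.Monad.State (State, modify)

addCounter :: Int -> State Int ()
addCounter n = modify (+ n)   -- how much we add depends on a runtime value

addOne :: State Int ()
addOne = modify (+ 1)         -- a fixed, statically known effect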
Arrows give more control over the input and the output types, but for me they really feel similar to applicatives. Maybe I am wrong about that.

Related

What is the 'minimum' needed to make an Applicative a Monad?

The Monad typeclass can be defined in terms of return and (>>=). However, if we already have a Functor instance for some type constructor f, then this definition is sort of 'more than we need' in that (>>=) and return could be used to implement fmap so we're not making use of the Functor instance we assumed.
In contrast, defining return and join seems like a more 'minimal'/less redundant way to make f a Monad. This way, the Functor constraint is essential because fmap cannot be written in terms of these operations. (Note join is not necessarily the only minimal way to go from Functor to Monad: I think (>=>) works as well.)
Similarly, Applicative can be defined in terms of pure and (<*>), but this definition again does not take advantage of the Functor constraint since these operations are enough to define fmap.
However, Applicative f can also be defined using unit :: f () and (>*<) :: f a -> f b -> f (a, b). These operations are not enough to define fmap so I would say in some sense this is a more minimal way to go from Functor to Applicative.
Is there a characterization of Monad as fmap, unit, (>*<), and some other operator which is minimal in that none of these functions can be derived from the others?
(>>=) does not work, since it can implement a >*< b = a >>= (\ x -> b >>= \ y -> pure (x, y)) where pure x = fmap (const x) unit.
Nor does join since m >>= k = join (fmap k m) so (>*<) can be implemented as above.
I suspect (>=>) fails similarly.
I have something, I think. It's far from elegant, but maybe it's enough to get you unstuck, at least. I started with join :: m (m a) -> ??? and asked "what could it produce that would require (<*>) to get back to m a?", which I found a fruitful line of thought that probably has more spoils.
If you introduce a new type T which can only be constructed inside the monad:
t :: m T
Then you could define a join-like operation which requires such a T:
joinT :: m (m a) -> m (T -> a)
The only way we can produce the T we need to get to the sweet, sweet a inside is by using t, and then we have to combine that with the result of joinT somehow. There are two basic operations that can combine two ms into one: (<*>) and joinT -- fmap is no help. joinT is not going to work, because we'll just need yet another T to use its result, so (<*>) is the only option, meaning that (<*>) can't be defined in terms of joinT.
You could roll that all up into an existential, if you prefer.
joinT :: (forall t. m t -> (m (m a) -> m (t -> a)) -> r) -> r

What is the relationship between bind and join?

I got the impression that (>>=) (used by Haskell) and join (preferred by mathematicians) are "equal" since one can write one in terms of the other:
import Control.Monad (join)
join x = x >>= id
x >>= f = join (fmap f x)
Additionally every monad is a functor since bind can be used to replace fmap:
fmap f x = x >>= (return . f)
I have the following questions:
Is there a (non-recursive) definition of fmap in terms of join? (fmap f x = join $ fmap (return . f) x follows from the equations above but is recursive.)
Is "every monad is a functor" a conclusion when using bind (in the definition of a monad), but an assumption when using join?
Is bind more "powerful" than join? And what would "more powerful" mean?
A monad can be either defined in terms of:
return :: a -> m a
bind :: m a -> (a -> m b) -> m b
or alternatively in terms of:
return :: a -> m a
fmap :: (a -> b) -> m a -> m b
join :: m (m a) -> m a
To your questions:
No, we cannot define fmap in terms of join, since otherwise we could remove fmap from the second list above.
No, "every monad is a functor" is a statement about monads in general, regardless whether you define your specific monad in terms of bind or in terms of join and fmap. It is easier to understand the statement if you see the second definition, but that's it.
Yes, bind is more "powerful" than join. It is exactly as "powerful" as join and fmap combined, if you mean with "powerful" that it has the capacity to define a monad (always in combination with return).
For an intuition, see e.g. this answer – bind allows you to combine or chain strategies/plans/computations (that are in a context) together. As an example, let's use the Maybe context (or Maybe monad):
λ: let plusOne x = Just (x + 1)
λ: Just 3 >>= plusOne
Just 4
fmap also lets you chain computations in a context together, but at the cost of increasing the nesting with every step.[1]
λ: fmap plusOne (Just 3)
Just (Just 4)
That's why you need join: to squash two levels of nesting into one. Remember:
join :: m (m a) -> m a
Having only the squashing step doesn't get you very far. To have a monad you also need fmap – and return, which is Just in the example above.
[1]: fmap and (>>=) don't take their two arguments in the same order, but don't let that confuse you.
Is there a [definition] of fmap in terms of join?
No, there isn't. That can be demonstrated by attempting to do it. Suppose we are given an arbitrary type constructor T, and functions:
returnT :: a -> T a
joinT :: T (T a) -> T a
From this data alone, we want to define:
fmapT :: (a -> b) -> T a -> T b
So let's sketch it:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = tb
  where
    tb = undefined -- tb :: T b
We need to get a value of type T b somehow. ta :: T a on its own won't do, so we need functions that produce T b values. The only two candidates are joinT and returnT. joinT doesn't help:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = joinT ttb
  where
    ttb = undefined -- ttb :: T (T b)
It just kicks the can down the road, as needing a T (T b) value under these circumstances is no improvement.
We might try returnT instead:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT b
  where
    b = undefined -- b :: b
Now we need a b value. The only thing that can give us one is f:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT (f a)
  where
    a = undefined -- a :: a
And now we are stuck: nothing can give us an a. We have exhausted all available possibilities, so fmapT cannot be defined in such terms.
A digression: it wouldn't suffice to cheat by using a function like this:
extractT :: T a -> a
With an extractT, we might try a = extractT ta, leading to:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT (f (extractT ta))
It is not enough, however, for fmapT to have the right type: it must also follow the functor laws. In particular, fmapT id = id should hold. With this definition, fmapT id is returnT . extractT, which, in general, is not id (most functors which are instances of both Monad and Comonad serve as examples).
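For a concrete instance of that parenthetical remark, NonEmpty lists are a Monad (and a Comonad in the comonad package), and the round trip returnT . extractT - here pure . NE.head, with roundTrip my own name - is visibly not the identity:

import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

roundTrip :: NonEmpty a -> NonEmpty a
roundTrip = pure . NE.head   -- pure plays returnT, NE.head plays extractT

-- roundTrip (1 :| [2, 3]) == 1 :| []  /=  1 :| [2, 3]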
Is "every monad is a functor" a conclusion when using bind (in the definition of a monad), but an assumption when using join?
"Every monad is a functor" is an assumption, or, more precisely, part of the definition of monad. To pick an arbitrary illustration, here is Emily Riehl, Category Theory In Context, p. 154:
Definition 5.1.1. A monad on a category C consists of
an endofunctor T : C → C,
a unit natural transformation η : 1_C ⇒ T, and
a multiplication natural transformation μ : T² ⇒ T,
so that the following diagrams commute in C^C: [diagrams of the monad laws]
A monad, therefore, involves an endofunctor by definition. For a Haskell type constructor T that instantiates Monad, the object mapping of that endofunctor is T itself, and the morphism mapping is its fmap. That T will be a Functor instance, and therefore will have an fmap, is, in contemporary Haskell, guaranteed by Applicative (and, by extension, Functor) being a superclass of Monad.
Is that the whole story, though? As far as Haskell is concerned, we know that liftM exists, and also that in a not-so-distant past Functor was not a superclass of Monad. Are those two facts mere Haskellisms? Not quite. In the classic paper Notions of computation and monads, Eugenio Moggi unearths the following definition (p. 3):
Definition 1.2 ([Man76]) A Kleisli triple over a category C is a triple (T, η, _*), where T : Obj(C) → Obj(C), η_A : A → T A for A ∈ Obj(C), f* : T A → T B for f : A → T B, and the following equations hold:
η_A* = id_(T A)
η_A ; f* = f   for   f : A → T B
f* ; g* = (f ; g*)*   for   f : A → T B   and   g : B → T C
The important detail here is that T is presented as merely an object mapping in the category C, and not as an endofunctor in C. Working in the Hask category, that amounts to taking a type constructor T without presupposing it is a Functor instance. In code, we might write that as:
class KleisliTriple t where
  return :: a -> t a
  (=<<) :: (a -> t b) -> t a -> t b

-- (return =<<) = id
-- (f =<<) . return = f
-- (g =<<) . (f =<<) = ((g =<<) . f =<<)
Flipped bind aside, that is the pre-AMP definition of Monad in Haskell. Unsurprisingly, Moggi's paper doesn't take long to show that "there is a one-to-one correspondence between Kleisli triples and monads" (p. 5), establishing along the way that T can be extended to an endofunctor (in Haskell, that step amounts to defining the morphism mapping liftM f m = return . f =<< m, and then showing it follows the functor laws).
All in all, if you write lawful definitions of return and (>>=) without presupposing fmap, you indeed get a lawful implementation of Functor as a consequence. "There is a one-to-one correspondence between Kleisli triples and monads" is a consequence of the definition of Kleisli triple, while "a monad involves an endofunctor" is part of the definition of monad. It is tempting to consider whether it would be more accurate to describe what Haskellers did when writing Monad instances as "setting up a Kleisli triple" rather than "setting up a monad", but I will refrain out of fear of getting mired down in terminological pedantry -- and in any case, now that Functor is a superclass of Monad there is no practical reason to worry about that.
Is bind more "powerful" than join? And what would "more powerful" mean?
Trick question!
Taken at face value, the answer would be yes, to the extent that, together with return, (>>=) makes it possible to implement fmap (via liftM, as noted above), while join doesn't. However, I don't feel it is worthwhile to insist on this distinction. Why so? Because of the monad laws. Just like it doesn't make sense to talk about a lawful (>>=) without presupposing return, it doesn't make sense to talk about a lawful join without presupposing return and fmap.
One might get the impression that I am giving too much weight to the laws by using them to tie Monad and Functor in this way. It is true that there are cases of laws that involve two classes, and that only apply to types which instantiate them both. Foldable provides a good example of that: we can find the following law in the Traversable documentation:
The superclass instances should satisfy the following: [...]
In the Foldable instance, foldMap should be equivalent to traversal with a constant applicative functor (foldMapDefault).
That this specific law doesn't always apply is not a problem, because we don't need it to characterise what Foldable is (alternatives include "a Foldable is a container from which we can extract some sequence of elements", and "a Foldable is a container that can be converted to the free monoid on its element type"). With the monad laws, though, it isn't like that: the meaning of the class is inextricably bound to all three of the monad laws.

Understanding differences between functors, applicative functors and monads? [duplicate]

While explaining to someone what a type class X is I struggle to find good examples of data structures which are exactly X.
So, I request examples for:
A type constructor which is not a Functor.
A type constructor which is a Functor, but not Applicative.
A type constructor which is an Applicative, but is not a Monad.
A type constructor which is a Monad.
I think there are plenty examples of Monad everywhere, but a good example of Monad with some relation to previous examples could complete the picture.
I look for examples which would be similar to each other, differing only in aspects important for belonging to the particular type class.
If one could manage to sneak up an example of Arrow somewhere in this hierarchy (is it between Applicative and Monad?), that would be great too!
A type constructor which is not a Functor:
newtype T a = T (a -> Int)
You can make a contravariant functor out of it, but not a (covariant) functor. Try writing fmap and you'll fail. Note that the contravariant functor version is reversed:
fmap :: Functor f => (a -> b) -> f a -> f b
contramap :: Contravariant f => (a -> b) -> f b -> f a
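For completeness, here is a sketch of the contravariant instance for that T, using the Contravariant class from Data.Functor.Contravariant (available in base):

import Data.Functor.Contravariant (Contravariant (..))

newtype T a = T (a -> Int)

instance Contravariant T where
  contramap f (T g) = T (g . f)   -- pre-compose: consumers map backwards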
A type constructor which is a functor, but not Applicative:
I don't have a good example. There is Const, but ideally I'd like a concrete non-Monoid and I can't think of any. All types are basically numeric, enumerations, products, sums, or functions when you get down to it. You can see below pigworker and I disagreeing about whether Data.Void is a Monoid:
instance Monoid Data.Void where
  mempty = undefined
  mappend _ _ = undefined
  mconcat _ = undefined
Since _|_ is a legal value in Haskell, and in fact the only legal value of Data.Void, this meets the Monoid rules. I am unsure what unsafeCoerce has to do with it, because your program is no longer guaranteed not to violate Haskell semantics as soon as you use any unsafe function.
See the Haskell Wiki for an article on bottom (link) or unsafe functions (link).
I wonder if it is possible to create such a type constructor using a richer type system, such as Agda or Haskell with various extensions.
A type constructor which is an Applicative, but not a Monad:
newtype T a = T {multidimensional array of a}
You can make an Applicative out of it, with something like:
mkarray [(+10), (+100), id] <*> mkarray [1, 2]
== mkarray [[11, 101, 1], [12, 102, 2]]
But if you make it a monad, you could get a dimension mismatch. I suspect that examples like this are rare in practice.
A type constructor which is a Monad:
[]
About Arrows:
Asking where an Arrow lies on this hierarchy is like asking what kind of shape "red" is. Note the kind mismatch:
Functor :: * -> *
Applicative :: * -> *
Monad :: * -> *
but,
Arrow :: * -> * -> *
My style may be cramped by my phone, but here goes.
newtype Not x = Kill {kill :: x -> Void}
cannot be a Functor. If it were, we'd have
kill (fmap (const ()) (Kill id)) () :: Void
and the Moon would be made of green cheese.
Meanwhile
newtype Dead x = Oops {oops :: Void}
is a functor
instance Functor Dead where
  fmap f (Oops corpse) = Oops corpse
but cannot be applicative, or we'd have
oops (pure ()) :: Void
and Green would be made of Moon cheese (which can actually happen, but only later in the evening).
(Extra note: Void, as in Data.Void is an empty datatype. If you try to use undefined to prove it's a Monoid, I'll use unsafeCoerce to prove that it isn't.)
Joyously,
newtype Boo x = Boo {boo :: Bool}
is applicative in many ways, e.g., as Dijkstra would have it,
instance Applicative Boo where
  pure _ = Boo True
  Boo b1 <*> Boo b2 = Boo (b1 == b2)
but it cannot be a Monad. To see why not, observe that return must be constantly Boo True or Boo False, and hence that
join . return == id
cannot possibly hold.
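A spelling-out of that argument (Boo', returnBoo and joinBoo are my own names, mirroring Boo above): a Boo value carries only a Bool, so any total join can only be some Bool -> Bool on it, while return discards its argument entirely.

newtype Boo' x = Boo' { boo' :: Bool }

returnBoo :: x -> Boo' x
returnBoo _ = Boo' True                -- (Boo' False works no better)

joinBoo :: (Bool -> Bool) -> Boo' (Boo' x) -> Boo' x
joinBoo f (Boo' b) = Boo' (f b)        -- the shape of every possible total join

-- joinBoo f (returnBoo m) = Boo' (f True) for every m, a constant,
-- so join . return cannot be id.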
Oh yeah, I nearly forgot
newtype Thud x = The {only :: ()}
is a Monad. Roll your own.
Plane to catch...
I believe the other answers missed some simple and common examples:
A type constructor which is a Functor but not an Applicative. A simple example is a pair:
instance Functor ((,) r) where
  fmap f (x, y) = (x, f y)
But there is no way to define its Applicative instance without imposing additional restrictions on r. In particular, there is no way to define pure :: a -> (r, a) for an arbitrary r.
A type constructor which is an Applicative, but is not a Monad. A well-known example is ZipList. (It's a newtype that wraps lists and provides different Applicative instance for them.)
fmap is defined in the usual way. But pure and <*> are defined as
pure x = ZipList (repeat x)
ZipList fs <*> ZipList xs = ZipList (zipWith id fs xs)
so pure creates an infinite list by repeating the given value, and <*> zips a list of functions with a list of values - it applies the i-th function to the i-th element. (The standard <*> on [] produces all possible combinations of applying the i-th function to the j-th element.) But there is no sensible way to define a monad for it (see this post).
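A small usage example of that instance (the name zipped is mine):

import Control.Applicative (ZipList (..))

zipped :: [Int]
zipped = getZipList ((+) <$> ZipList [1, 2, 3] <*> ZipList [10, 20, 30])
-- [11, 22, 33]; the plain [] instance would give all nine combinations instead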
How do arrows fit into the functor/applicative/monad hierarchy?
See Idioms are oblivious, arrows are meticulous, monads are promiscuous by Sam Lindley, Philip Wadler, Jeremy Yallop. MSFP 2008. (They call applicative functors idioms.) The abstract:
We revisit the connection between three notions of computation: Moggi's monads, Hughes's arrows and McBride and Paterson's idioms (also called applicative functors). We show that idioms are equivalent to arrows that satisfy the type isomorphism A ~> B = 1 ~> (A -> B) and that monads are equivalent to arrows that satisfy the type isomorphism A ~> B = A -> (1 ~> B). Further, idioms embed into arrows and arrows embed into monads.
A good example for a type constructor which is not a functor is Set: You can't implement fmap :: (a -> b) -> f a -> f b, because without an additional constraint Ord b you can't construct f b.
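To make that point concrete: Data.Set does export a map, but its Ord constraint on the result type is exactly what the Functor signature forbids (setMap is my own name):

import Data.Set (Set)
import qualified Data.Set as Set

setMap :: Ord b => (a -> b) -> Set a -> Set b
setMap = Set.map   -- the extra Ord b is precisely what fmap does not allow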
I'd like to propose a more systematic approach to answering this question, and also to show examples that do not use any special tricks like the "bottom" values or infinite data types or anything like that.
When do type constructors fail to have type class instances?
In general, there are two reasons why a type constructor could fail to have an instance of a certain type class:
Cannot implement the type signatures of the required methods from the type class.
Can implement the type signatures but cannot satisfy the required laws.
Examples of the first kind are easier than those of the second kind because for the first kind, we just need to check whether one can implement a function with a given type signature, while for the second kind, we are required to prove that no implementation could possibly satisfy the laws.
Specific examples
A type constructor that cannot have a functor instance because the type cannot be implemented:
data F z a = F (a -> z)
This is a contrafunctor, not a functor, with respect to the type parameter a, because a appears in a contravariant position. It is impossible to implement a function with type signature (a -> b) -> F z a -> F z b.
A type constructor that is not a lawful functor even though the type signature of fmap can be implemented:
data Q a = Q(a -> Int, a)
fmap :: (a -> b) -> Q a -> Q b
fmap f (Q(g, x)) = Q(\_ -> g x, f x) -- this fails the functor laws!
The curious aspect of this example is that we can implement fmap with the correct type even though Q cannot possibly be a functor, because it uses a in a contravariant position. So the implementation of fmap shown above is misleading - even though it has the correct type signature (I believe this is the only possible implementation of that type signature), the functor laws are not satisfied. For example, fmap id ≠ id, because let (Q(f,_)) = fmap id (Q(read,"123")) in f "456" is 123, but let (Q(f,_)) = id (Q(read,"123")) in f "456" is 456.
In fact, Q is only a profunctor - it is neither a functor nor a contrafunctor.
A lawful functor that is not applicative because the type signature of pure cannot be implemented: take the Writer monad (a, w) and remove the constraint that w should be a monoid. It is then impossible to construct a value of type (a, w) out of a.
A functor that is not applicative because the type signature of <*> cannot be implemented: data F a = F (Either (Int -> a) (String -> a)).
A functor that is not lawful applicative even though the type class methods can be implemented:
data P a = P ((a -> Int) -> Maybe a)
The type constructor P is a functor because it uses a only in covariant positions.
instance Functor P where
  fmap :: (a -> b) -> P a -> P b
  fmap fab (P pa) = P (\q -> fmap fab $ pa (q . fab))
The only possible implementation of the type signature of <*> is a function that always returns Nothing:
(<*>) :: P (a -> b) -> P a -> P b
(P pfab) <*> (P pa) = P (\_ -> Nothing) -- fails the laws!
But this implementation does not satisfy the identity law for applicative functors.
A functor that is Applicative but not a Monad because the type signature of bind cannot be implemented.
I do not know any such examples!
A functor that is Applicative but not a Monad because laws cannot be satisfied even though the type signature of bind can be implemented.
This example has generated quite a bit of discussion, so it is safe to say that proving this example correct is not easy. But several people have verified this independently by different methods. See Is `data PoE a = Empty | Pair a a` a monad? for additional discussion.
newtype B a = B (Maybe (a, a))
  deriving Functor   -- requires DeriveFunctor

instance Applicative B where
  pure x = B (Just (x, x))
  B mf <*> B mx = B $ case (mf, mx) of
    (Just (f1, f2), Just (x1, x2)) -> Just (f1 x1, f2 x2)
    _                              -> Nothing
It is somewhat cumbersome to prove that there is no lawful Monad instance. The reason for the non-monadic behavior is that there is no natural way of implementing bind when a function f :: a -> B b could return Nothing or Just for different values of a.
It is perhaps clearer to consider Maybe (a, a, a), which is also not a monad, and to try implementing join for that. One will find that there is no intuitively reasonable way of implementing join.
join :: Maybe (Maybe (a, a, a), Maybe (a, a, a), Maybe (a, a, a)) -> Maybe (a, a, a)
join Nothing = Nothing
join (Just (Nothing, Just (x1,x2,x3), Just (y1,y2,y3))) = ???
join (Just (Just (x1,x2,x3), Nothing, Just (y1,y2,y3))) = ???
-- etc.
In the cases indicated by ???, it seems clear that we cannot produce Just (z1, z2, z3) in any reasonable and symmetric manner out of six different values of type a. We could certainly choose some arbitrary subset of these six values - for instance, always take the first non-empty Maybe - but this would not satisfy the laws of the monad. Returning Nothing will also not satisfy the laws.
A tree-like data structure that is not a monad even though it has associativity for bind - but fails the identity laws.
The usual tree-like monad (or "a tree with functor-shaped branches") is defined as
data Tr f a = Leaf a | Branch (f (Tr f a))
This is a free monad over the functor f. The shape of the data is a tree where each branch point is a "functor-ful" of subtrees. The standard binary tree would be obtained with type f a = (a, a).
If we modify this data structure by making also the leaves in the shape of the functor f, we obtain what I call a "semimonad" - it has bind that satisfies the naturality and the associativity laws, but its pure method fails one of the identity laws. "Semimonads are semigroups in the category of endofunctors, what's the problem?" This is the type class Bind.
For simplicity, I define the join method instead of bind:
data Trs f a = Leaf (f a) | Branch (f (Trs f a))
join :: Trs f (Trs f a) -> Trs f a
join (Leaf ftrs) = Branch ftrs
join (Branch ftrstrs) = Branch (fmap #f join ftrstrs)
The branch grafting is standard, but the leaf grafting is non-standard and produces a Branch. This is not a problem for the associativity law but breaks one of the identity laws.
When do polynomial types have monad instances?
Neither of the functors Maybe (a, a) and Maybe (a, a, a) can be given a lawful Monad instance, although they are obviously Applicative.
These functors have no tricks - no Void or bottom anywhere, no tricky laziness/strictness, no infinite structures, and no type class constraints. The Applicative instance is completely standard. The functions return and bind can be implemented for these functors but will not satisfy the laws of the monad. In other words, these functors are not monads because a specific structure is missing (but it is not easy to understand what exactly is missing). As an example, a small change in the functor can make it into a monad: Maybe a (that is, Nothing | Just a) is a monad, and the similar functor P12 a = Either a (a, a) is also a monad.
Constructions for polynomial monads
In general, here are some constructions that produce lawful Monads out of polynomial types. In all these constructions, M is a monad:
type M a = Either c (w, a) where w is any monoid
type M a = m (Either c (w, a)) where m is any monad and w is any monoid
type M a = (m1 a, m2 a) where m1 and m2 are any monads
type M a = Either a (m a) where m is any monad
The first construction is WriterT w (Either c), the second construction is WriterT w (EitherT c m). The third construction is a component-wise product of monads: pure #M is defined as the component-wise product of pure #m1 and pure #m2, and join #M is defined by omitting cross-product data (e.g. m1 (m1 a, m2 a) is mapped to m1 (m1 a) by omitting the second part of the tuple):
join :: (m1 (m1 a, m2 a), m2 (m1 a, m2 a)) -> (m1 a, m2 a)
join (m1x, m2x) = (join #m1 (fmap fst m1x), join #m2 (fmap snd m2x))
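As an aside, construction 3 is the standard product of monads, which base ships as Data.Functor.Product; its Monad instance is exactly this component-wise construction (the example value below is mine):

import Data.Functor.Product (Product (..))

example :: Product Maybe [] Int
example = Pair (Just 1) [1, 2, 3]   -- a Monad because Maybe and [] both are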
The fourth construction is defined as
data M m a = Pure a | Wrap (m a)   -- i.e. Either a (m a)

pureM :: a -> M m a
pureM = Pure

joinM :: Monad m => M m (M m a) -> M m a
joinM (Pure mma) = mma
joinM (Wrap me)  = Wrap (me >>= squash)
  where
    squash (Pure x)  = return x
    squash (Wrap ma) = ma
I have checked that all four constructions produce lawful monads.
I conjecture that there are no other constructions for polynomial monads. For example, the functor Maybe (Either (a, a) (a, a, a, a)) is not obtained through any of these constructions and so is not monadic. However, Either (a, a) (a, a, a) is monadic because it is isomorphic to the product of three monads a, a, and Maybe a. Also, Either (a,a) (a,a,a,a) is monadic because it is isomorphic to the product of a and Either a (a, a, a).
The four constructions shown above will allow us to obtain any sum of any number of products of any number of a's, for example Either (Either (a, a) (a, a, a, a)) (a, a, a, a, a) and so on. All such type constructors will have (at least one) Monad instance.
It remains to be seen, of course, what use cases might exist for such monads. Another issue is that the Monad instances derived via constructions 1-4 are in general not unique. For example, the type constructor type F a = Either a (a, a) can be given a Monad instance in two ways: by construction 4 using the monad (a, a), and by construction 3 using the type isomorphism Either a (a, a) = (a, Maybe a). Again, finding use cases for these implementations is not immediately obvious.
A question remains - given an arbitrary polynomial data type, how to recognize whether it has a Monad instance. I do not know how to prove that there are no other constructions for polynomial monads. I don't think any theory exists so far to answer this question.

Is there such a thing as a "semi-monad" or "counter-monad"?

Well, I am studying Haskell monads. When I read the Wikibook category theory article, I found that the signatures of the monad operations look a lot like tautologies in logic, provided you read M a as ~~A, where ~ is logical negation.
return :: a -> M a -- Map to tautology A => ~~A, double negation introduction
(>>=) :: M a -> (a -> M b) -> M b -- Map to tautology ~~A => (A => ~~B) => ~~B
The other operations are also tautologies:
fmap :: (a -> b) -> M a -> M b -- Map to (A => B) -> (~~A => ~~B)
join :: M (M a) -> M a -- Map to ~~(~~A) => ~~A
It's also understood that, since the Curry-Howard correspondence for ordinary functional languages is with intuitionistic logic rather than classical logic, we cannot expect a tautology like ~~A => A to have a corresponding term.
But I am thinking of something else: why does Monad relate only to double negation? What corresponds to a single negation? This led me to the following class definition:
class Nomad n where
  rfmap :: (a -> b) -> n b -> n a
  dneg :: a -> n (n a)

return :: Nomad n => a -> n (n a)
return = dneg

(>>=) :: Nomad n => n (n a) -> (a -> n (n b)) -> n (n b)
x >>= f = rfmap dneg $ rfmap (rfmap f) x
Here I defined a concept called "Nomad", which supports two operations (each corresponding to an axiom of intuitionistic logic). The name "rfmap" reflects the fact that its signature is similar to the functor's fmap, but with the order of a and b reversed in the result. Now I can re-define the Monad operations in terms of them, replacing M a with n (n a).
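For what it's worth, here is one concrete instance of the Nomad class just defined (a sketch; the name Neg is mine): take n a = a -> r, "negation into r", so that n (n a) is the continuation type (a -> r) -> r, which is where the answers below end up as well.

newtype Neg r a = Neg { runNeg :: a -> r }

instance Nomad (Neg r) where
  rfmap f (Neg g) = Neg (g . f)     -- contravariant map
  dneg x = Neg (\(Neg k) -> k x)    -- a -> n (n a), double-negation introduction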
So now let's go to the question part. The fact that Monad is a concept from category theory seems to mean that my "Nomad" is also a category theory concept. So what is it? Is it useful? Are there any papers or research results exists in this topic?
Double negation is a particular monad
data Void  -- no constructors, because it is uninhabited

newtype DN a = DN { runDN :: (a -> Void) -> Void }

instance Monad DN where
  return x = DN $ \f -> f x
  m >>= k  = DN $ \c -> runDN m (\a -> runDN (k a) c)
actually, this is an example of a more general monad
type DN = Cont Void
newtype Cont r a = Cont {runCont :: (a -> r) -> r}
which is the monad for continuation passing.
A concept like "monad" is not defined just by a signature, but also by some laws. So, here is a question, what should the laws for your construction be?
(a -> b) -> f b -> f a
is the signature of a method well known to category theory, the contravariant functor. It obeys essentially the same laws as functors (it preserves (co)composition and identity). Actually, a contravariant functor is exactly a functor to the opposite category. Since we are interested in "Haskell functors", which are supposed to be endofunctors, we can see that a "Haskell contravariant functor" is a functor Hask -> Hask_Op.
On the other hand, what about
a -> f (f a)
what laws should this have? I have a suggestion. In category theory, it is possible to map between functors. Given two functors F and G, each from category C to category D, a natural transformation from F to G is a family of morphisms in D
forall a. F a -> G a
that obeys certain coherence laws. You can do a lot with natural transformations, including using them to define a "functor category." But, the classic joke (due to Mac Lane) is that categories were invented to talk about functors, functors were invented to talk about natural transformations, and natural transformations were invented to talk about adjunctions.
A functor F : D -> C and a functor G : C -> D form an adjunction from C to D if there exist two natural transformations
unit : Id -> G . F
counit : F . G -> Id
This idea of an adjunction is a frequent way of understanding monads. Every adjunction gives rise to a monad in a totally natural way. That is, since the composition of these two functors is an endofunctor, and you have something akin to return (the unit), all you need is the join. But that is easy, join is just a function G . F . G . F -> G . F which you get by just using the counit "in the middle".
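Here is a Haskell rendering of that recipe, using the familiar (-, s) ⊣ (s -> -) adjunction, whose composite endofunctor is the State monad (the names unitA, counitA, returnState and joinState are mine):

newtype State s a = State { runState :: s -> (a, s) }

unitA :: a -> (s -> (a, s))            -- unit : Id -> G . F
unitA a = \s -> (a, s)

counitA :: (s -> a, s) -> a            -- counit : F . G -> Id
counitA (f, s) = f s

returnState :: a -> State s a          -- return is the unit
returnState = State . unitA

joinState :: State s (State s a) -> State s a
joinState (State m) = State $ \s ->
  case m s of (State m', s') -> m' s'  -- the counit applied "in the middle"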
So, then with all this, what is it that you are looking for? Well
dneg :: a -> n (n a)
looks exactly like the unit of the adjunction of a contravariant functor with itself. Therefore, your Nomad type is likely (certainly if your use of it to construct a monad is correct) exactly the same as "a contravariant functor which is self adjoint." Searching for self adjoint functors will lead you back to Double-negation and Continuation passing...which is just where we started!
EDIT: Almost certainly some of the arrows above are backwards. The basic idea is right, though. You can work it out yourself using the citations below:
The best books on category theory are probably,
Steve Awodey, Category Theory
Saunders Mac Lane, Categories for the Working Mathematician
although numerous more approachable introductory books exist, including Benjamin Pierce's book on category theory for computer scientists.
Video lectures online
The Category Theory lectures by Steve Awodey from the Oregon Programming Language Summer School
The Catsters' short lectures on YouTube (indexed here); pay particular attention to the videos on adjunctions
A number of papers explore the adjunction angle on the continuation monad, for example
Hayo Thielecke, Continuation Semantics and Self-adjointness
The search terms "self adjoint", "continuation", and "monad" are good. Also, a number of bloggers have written about these issues. If you google "where do monads come from" you get useful results like this one from the n-Category Café, or this one from sigfpe. Also Sjoerd Visscher's link to the Comonad Reader.
That would be a self-adjoint contravariant functor. rfmap provides the contravariant functor part, and dneg is the unit and counit of the adjunction.
Op r is an example, which creates the continuation monad. See the contravariant modules in http://hackage.haskell.org/package/adjunctions for some code.
You should read http://comonad.com/reader/2011/monads-from-comonads/, which is related and very interesting.

Resources