While working through the Composing Types chapter of the Haskell Book, I was asked to write Functor and Applicative instances for the following type.
newtype Compose f g a = Compose { getCompose :: f (g a) }
I wrote the following definitions
Functor:
fmap f (Compose fga) = Compose $ (fmap . fmap) f fga
Applicative:
(Compose f) <*> (Compose a) = Compose $ (<*>) <$> f <*> a
I learned that composing two Functors or Applicatives gives Functor and Applicative respectively.
The author also explained that it is not possible to compose two Monads the same way, which is why we use Monad Transformers. But I don't want to move on to monad transformers until I'm clear on why monads do not compose.
So far I tried to write bind function like this:
Monad:
(>>=) :: Compose f g a -> (a -> Compose f g b) -> Compose f g b
(Compose fga) >>= h = (fmap.fmap) h fga
and of course got this error from GHC
Expected type: Compose f g b
Actual type: f (g (Compose f g b))
If I can strip the outermost f g somehow, the composition gives us a monad right? (I still couldn't figure out how to strip that though)
I tried reading answers to other Stack Overflow questions like this one, but the answers are all fairly theoretical or mathematical. I still haven't learned why Monads do not compose. Can somebody explain it to me without using much math?
I think this is easiest to understand by looking at the join operator:
join :: Monad m => m (m a) -> m a
join is an alternative to >>= for defining a Monad, and is a little easier to reason about. (But now you have an exercise to do: show how to implement >>= from join, and how to implement join from >>=!)
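If you want to check your answers to that exercise afterwards, here is one standard way the two definitions go:

```haskell
import Control.Monad (join)

-- join in terms of (>>=): bind with the identity function
joinFromBind :: Monad m => m (m a) -> m a
joinFromBind mma = mma >>= id

-- (>>=) in terms of join: fmap the Kleisli arrow over the structure, then flatten
bindFromJoin :: Monad m => m a -> (a -> m b) -> m b
bindFromJoin ma f = join (fmap f ma)
```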
Let's try to make a join operation for Compose f g and see what goes wrong. Our input is essentially a value of type f (g (f (g a))), and we want to produce a value of type f (g a). We also know that we have join for f and g individually, so if we could get a value of type f (f (g (g a))), then we could hit it with fmap join . join to get the f (g a) we wanted.
Now, f (f (g (g a))) isn't so far from f (g (f (g a))). All we really need is a function like this: distribute :: g (f a) -> f (g a). Then we could implement join like this:
join = Compose . fmap join . join . fmap (distribute . fmap getCompose) . getCompose
Note: there are some laws that we would want distribute to satisfy, in order to make sure that the join we get here is lawful.
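As an aside, some pairs of monads really do have one. Taking g = Maybe, sequenceA gives us a distribute for any Applicative f (modulo checking those laws), which is essentially why MaybeT f can be a monad:

```haskell
-- Maybe is Traversable, so sequenceA swaps it with any Applicative f.
-- This is (modulo checking the laws) a distributive law for g = Maybe.
distributeMaybe :: Applicative f => Maybe (f a) -> f (Maybe a)
distributeMaybe = sequenceA
```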
Ok, so that shows how we can compose two monads if we have a distributive law distribute :: (Monad f, Monad g) => g (f a) -> f (g a). Now, it could be true that every pair of monads has a distributive law. Maybe we just have to think really hard about how to write one down?
Unfortunately, there are pairs of monads that don't have a distributive law. So we can answer your original question by producing two monads that definitely don't have a way of turning a g (f a) into an f (g a). These two monads will bear witness to the fact that monads don't compose in general.
I claim that g = IO and f = Maybe do not have a distributive law:
-- Impossible!
distribute :: IO (Maybe a) -> Maybe (IO a)
Let's think about why such a thing should be impossible. The input to this function is an IO action that goes out into the real world and eventually produces Nothing or a Just x. The output of this function is either Nothing, or Just an IO action that, when run, eventually produces x. To produce the Maybe (IO a), we would have to peek into the future and predict what the IO (Maybe a) action is going to do!
In summary:
Monads can compose if there is a distributive law g (f a) -> f (g a). (but see the addendum below)
There are some monads that don't have such a distributive law.
Some monads can compose with each other, but not every pair of monads can compose.
Addendum: "if", but what about "only if"? If all three of F, G, and FG are monads, then you can construct a natural transformation δ : ∀X. GFX -> FGX as the composition of GFη_X : GFX -> GFGX followed by η_{GFGX} : GFGX -> FGFGX and then by μ_X : FGFGX -> FGX. In Haskellese (with explicit type applications for clarity), that would be
-- needs TypeApplications and ScopedTypeVariables
delta :: forall f g x. (Monad f, Monad g, Monad (Compose f g))
      => g (f x) -> f (g x)
delta = join' . pure @f . fmap @g (fmap @f (pure @g))
  where
    -- join for (f . g), via the `Monad (Compose f g)` instance
    join' :: f (g (f (g x))) -> f (g x)
    join' = getCompose . join @(Compose f g) . fmap Compose . Compose
So if the composition FG is a monad, then you can get a natural transformation with the right shape to be a distributive law. However, there are some extra constraints that fall out of making sure your distributive law satisfies the correct properties, vaguely alluded to above. As always, the n-Category Cafe has the gory details.
Related
Consider the following signature of foldMap
foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
This is very similar to "bind", just with the arguments swapped:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
It therefore seems to me that there must be some sort of relationship between Foldable, Monoid and Monad, but I can't find it in the superclasses. Presumably one or two of these can be transformed into the others, but I'm not sure how.
Could that relationship be detailed?
Monoid and Monad
Wow, this is actually one of the rare times we can use the quote:
A monad is just a monoid in the category of endofunctors, [...]
Let's start with a monoid. A monoid in the category Set of sets is a set of elements m with an empty element mempty and an associative function mappend to combine elements such that
mempty `mappend` x == x -- for any x
x `mappend` mempty == x -- for any x
-- and
a `mappend` (b `mappend` c) == (a `mappend` b) `mappend` c -- for all a, b, c
Note that monoids are not limited to sets: they exist in many other categories as well, for example in categories of endofunctors, where they are called monads. Basically, any time you have an associative binary operation with an identity element, you have a monoid.
Now a monad, which is a "monoid in the category of endofunctors", has the following properties:
It's an endofunctor, which means it has kind * -> * in the category Hask of Haskell types.
Now, to go further you need a little bit of category theory, which I will try to explain here: given two functors F and G, a natural transformation from F to G is a family of functions α, with one component α :: F a -> G a for every a, compatible with fmap on both sides. Roughly speaking, a natural transformation is a function between functors.
Now, in category theory there can be many functors between two categories. In a simplified view it can be said that we don't even care which functors map from where to where; we only care about the natural transformations between them.
Coming back to monads, we can now see that a "monoid in the category of endofunctors" must possess two natural transformations. Let's call our monad endofunctor M:
A natural transformation from the identity (endo)functor to the monad:
η :: 1 -> M -- this is return
And a natural transformation from the composition of the monad with itself to the monad:
μ :: M × M -> M
Since × here is the composition of functors, we can (roughly speaking) also write:
μ :: (m × m) a -> m a
μ :: m (m a) -> m a -- join in Haskell
Satisfying these laws:
μ . M μ == μ . μ M -- associativity
μ . M η == id == μ . η M -- left and right identity
So, a monad is a special case of a monoid in the category of endofunctors. You can't write a monoid instance for a monad in ordinary Haskell, since Haskell's notion of composition is too weak (I think this is because functions are restricted to Hask, which is weaker than Cat). See this for more information.
What about Foldable?
Now as for Foldable: there exist definitions of folds using a custom binary function to combine the elements. You could of course supply any function to combine elements, or you could use an existing concept of combining elements: the monoid. Again, note that this is the set monoid (Haskell's Monoid class), not the general categorical definition of a monoid.
Since the monoid's mappend is associative, foldl and foldr yield the same result, which is why the folding of monoids can be reduced to fold :: Monoid m, Foldable t => t m -> m. This is an obvious connection between monoid and foldable.
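A quick check of both, using standard Data.Foldable machinery:

```haskell
import Data.Foldable (fold)
import Data.Monoid (Sum(..))

-- fold :: (Foldable t, Monoid m) => t m -> m
-- collapses a container whose elements are already monoid values
collapsed :: String
collapsed = fold ["foo", "bar"]  -- String is a monoid under (++)

-- foldMap first maps into a monoid, then folds
total :: Int
total = getSum (foldMap Sum [1, 2, 3, 4])
```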
#danidiaz already pointed out the connection between Applicative, Monoid and Foldable using the Const functor Const a b = Const a, whose applicative instance requires the first parameter of Const to be a monoid (no pure without mempty (disregarding undefined)).
Comparing monad and foldable is a bit of a stretch in my opinion, since monad is more powerful than foldable: foldable can only accumulate a container's values according to a mapping function, while monadic bind can structurally alter the context (a -> m b).
Summary: (>>=) and traverse look similar because they both are arrow mappings of functors, while foldMap is (almost) a specialised traverse.
Before we begin, there is one bit of terminology to explain. Consider fmap:
fmap :: Functor f => (a -> b) -> (f a -> f b)
A Haskell Functor is a functor from the Hask category (the category with Haskell functions as arrows) to itself. In category theory terms, we say that the (specialised) fmap is the arrow mapping of this functor, as it is the part of the functor that takes arrows to arrows. (For the sake of completeness: a functor consists of an arrow mapping plus an object mapping. In this case, the objects are Haskell types, and so the object mapping takes types to types -- more specifically, the object mapping of a Functor is its type constructor.)
We will also want to keep in mind the category and functor laws:
-- Category laws for Hask:
f . id = id
id . f = f
h . (g . f) = (h . g) . f
-- Functor laws for a Haskell Functor:
fmap id = id
fmap (g . f) = fmap g . fmap f
In what follows, we will work with categories other than Hask, and functors which are not Functors. In such cases, we will replace id and (.) by the appropriate identity and composition, fmap by the appropriate arrow mapping and, in one case, = by an appropriate equality of arrows.
(=<<)
To begin with the more familiar part of the answer, for a given monad m the a -> m b functions (also known as Kleisli arrows) form a category (the Kleisli category of m), with return as identity and (<=<) as composition. The three category laws, in this case, are just the monad laws:
f <=< return = f
return <=< f = f
h <=< (g <=< f) = (h <=< g) <=< f
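Spelled out in terms of (>>=), that composition looks like this, together with a tiny Maybe example (compK here is just (<=<) from Control.Monad, renamed to avoid a clash):

```haskell
-- Kleisli composition; in base this is (<=<) from Control.Monad
compK :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
(g `compK` f) a = f a >>= g

-- a Kleisli arrow into Maybe: halve even numbers, fail on odd ones
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing
```

For example, (half `compK` half) 8 is Just 2, while (half `compK` half) 6 is Nothing, because the second halving fails.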
Now, you asked about flipped bind:
(=<<) :: Monad m => (a -> m b) -> (m a -> m b)
It turns out that (=<<) is the arrow mapping of a functor from the Kleisli category of m to Hask. The functor laws applied to (=<<) amount to two of the monad laws:
return =<< x = x -- right unit
(g <=< f) =<< x = g =<< (f =<< x) -- associativity
traverse
Next, we need a detour through Traversable (a sketch of a proof of the results in this section is provided at the end of the answer). First, we note that the a -> f b functions for all applicative functors f taken at once (as opposed to one f at a time, as when specifying a Kleisli category) form a category, with Identity as identity and Compose . fmap g . f as composition. For that to work, we also have to adopt a more relaxed equality of arrows, which ignores the Identity and Compose boilerplate (only necessary because I am writing this in pseudo-Haskell rather than proper mathematical notation). More precisely, we will consider any two functions that can be interconverted using some composition of the Identity and Compose isomorphisms to be equal arrows (in other words, we will not distinguish between a and Identity a, nor between f (g a) and Compose f g a).
Let's call that category the "traversable category" (as I cannot think of a better name right now). In concrete Haskell terms, an arrow in this category is a function which adds an extra layer of Applicative context "below" any previous existing layers. Now, consider traverse:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> (t a -> f (t b))
Given a choice of traversable container, traverse is the arrow mapping of a functor from the "traversable category" to itself. The functor laws for it amount to the traversable laws.
In short, both (=<<) and traverse are analogues of fmap for functors involving categories other than Hask, and so it is not surprising that their types are a bit similar to each other.
foldMap
We still have to explain what all of that has to do with foldMap. The answer is that foldMap can be recovered from traverse (cf. danidiaz's answer -- it uses traverse_, but as the applicative functor is Const m the result is essentially the same):
-- cf. Data.Traversable
foldMapDefault :: (Traversable t, Monoid m) => (a -> m) -> (t a -> m)
foldMapDefault f = getConst . traverse (Const . f)
Thanks to the Const/getConst isomorphism, this is clearly equivalent to:
foldMapDefault' :: (Traversable t, Monoid m)
=> (a -> Const m b) -> (t a -> Const m (t b))
foldMapDefault' f = traverse f
Which is just traverse specialised to the Monoid m => Const m applicative functors. Even though Traversable is not Foldable and foldMapDefault is not foldMap, this provides a decent justification for why the type of foldMap resembles that of traverse and, transitively, that of (=<<).
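A quick sanity check that this really behaves like foldMap (Const here is the one from Data.Functor.Const in base):

```haskell
import Data.Functor.Const (Const(..))
import Data.Monoid (Sum(..))

-- traverse at the Const applicative ignores the rebuilt structure and
-- just accumulates the monoidal summaries, recovering foldMap
foldMapDefault :: (Traversable t, Monoid m) => (a -> m) -> t a -> m
foldMapDefault f = getConst . traverse (Const . f)
```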
As a final observation, note that the arrows of the "traversable category" with applicative functor Const m for some Monoid m do not form a subcategory, as there is no identity unless Identity is among the possible choices of applicative functor. That probably means there is nothing else of interest to say about foldMap from the perspective of this answer. The only single choice of applicative functor that gives a subcategory is Identity, which is not at all surprising, given how a traversal with Identity amounts to fmap on the container.
Appendix
Here is a rough sketch of the derivation of the traverse result, yanked from my notes from several months ago with minimal editing. ~ means "equal up to (some relevant) isomorphism".
-- Identity and composition for the "traversable category".
idT = Identity
g .*. f = Compose . fmap g . f
-- Category laws: right identity
f .*. idT ~ f
f .*. idT
Compose . fmap f . idT
Compose . fmap f . Identity
Compose . Identity . f
f -- using getIdentity . getCompose
-- Category laws: left identity
idT .*. f ~ f
idT .*. f
Compose . fmap Identity . f
f -- using fmap getIdentity . getCompose
-- Category laws: associativity
h .*. (g .*. f) ~ (h .*. g) .*. f
h .*. (g .*. f) -- LHS
h .*. (Compose . fmap g . f)
Compose . fmap h . (Compose . fmap g . f)
Compose . Compose . fmap (fmap h) . fmap g . f
(h .*. g) .*. f -- RHS
(Compose . fmap h . g) .*. f
Compose . fmap (Compose . fmap h . g) . f
Compose . fmap (Compose . fmap h) . fmap g . f
Compose . fmap Compose . fmap (fmap h) . fmap g . f
-- using Compose . Compose . fmap getCompose . getCompose
Compose . Compose . fmap (fmap h) . fmap g . f -- RHS ~ LHS
-- Functor laws for traverse: identity
traverse idT ~ idT
traverse Identity ~ Identity -- i.e. the identity law of Traversable
-- Functor laws for traverse: composition
traverse (g .*. f) ~ traverse g .*. traverse f
traverse (Compose . fmap g . f) ~ Compose . fmap (traverse g) . traverse f
-- i.e. the composition law of Traversable
When a container is Foldable, there is a relationship between foldMap and Applicative (which is a superclass of Monad).
Foldable has a function called traverse_, with signature:
traverse_ :: Applicative f => (a -> f b) -> t a -> f ()
One possible Applicative is Constant. To be an Applicative, it requires the "accumulator" parameter to be a Monoid:
newtype Constant a b = Constant { getConstant :: a } -- no b value at the term level!
Monoid a => Applicative (Constant a)
for example:
ghci> Constant (Sum 1) <*> Constant (Sum 2) :: Constant (Sum Int) whatever
Constant (Sum {getSum = 3})
We can define foldMap in terms of traverse_ and Constant this way:
foldMap' :: (Monoid m, Foldable t) => (a -> m) -> t a -> m
foldMap' f = getConstant . traverse_ (Constant . f)
We use traverse_ to go through the container, accumulating values with Constant, and then we use getConstant to get rid of the newtype.
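Putting it all together as a self-contained sketch (traverse_ comes from Data.Foldable in base; the Constant here is written out by hand rather than imported from the transformers package):

```haskell
import Data.Foldable (traverse_)
import Data.Monoid (Sum(..))

newtype Constant a b = Constant { getConstant :: a }

instance Functor (Constant a) where
  fmap _ (Constant x) = Constant x

-- the "accumulator" parameter must be a Monoid
instance Monoid a => Applicative (Constant a) where
  pure _ = Constant mempty
  Constant x <*> Constant y = Constant (x <> y)

-- foldMap, recovered from traverse_ and Constant
foldMap' :: (Monoid m, Foldable t) => (a -> m) -> t a -> m
foldMap' f = getConstant . traverse_ (Constant . f)
```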
There is a lot of talk about Applicative not needing its own transformer class, like this:
class AppTrans t where
liftA :: Applicative f => f a -> t f a
But I can define applicative transformers that don't seem to be compositions of applicatives! For example, side-effectful streams:
data MStream f a = MStream (f (a, MStream f a))
Lifting just performs the side effect at every step:
instance AppTrans MStream where
liftA action = MStream $ (,) <$> action <*> pure (liftA action)
And if f is an applicative, then MStream f is as well:
instance Functor f => Functor (MStream f) where
fmap fun (MStream stream) = MStream $ (\(a, as) -> (fun a, fmap fun as)) <$> stream
instance Applicative f => Applicative (MStream f) where
pure = liftA . pure
MStream fstream <*> MStream astream = MStream
$ (\(f, fs) (a, as) -> (f a, fs <*> as)) <$> fstream <*> astream
I know that for any practical purposes, f should be a monad:
joinS :: Monad m => MStream m a -> m [a]
joinS (MStream stream) = do
(a, as) <- stream
aslist <- joinS as
return $ a : aslist
But while there is a Monad instance for MStream m, it's inefficient. (Or even incorrect?) The Applicative instance is actually useful!
Now note that usual streams arise as special cases for the identity functor:
import Data.Functor.Identity
type Stream a = MStream Identity a
But the composition of Stream and f is not MStream f! Rather, Compose Stream f a is isomorphic to Stream (f a).
I'd like to know whether MStream is a composition of any two applicatives.
Edit:
I'd like to offer a category theoretic viewpoint. A transformer is a "nice" endofunctor t on the category C of applicative functors (i.e. lax monoidal functors with strength), together with a natural transformation liftA from the identity on C to t. The more general question is now what useful transformers exist that are not of the form "compose with g" (where g is an applicative). My claim is that MStream is one of them.
Great question! I believe there are two different parts of this question:
Composing existing applicatives or monads into more complex ones.
Constructing all applicatives/monads from some given starting set.
Ad 1.: Monad transformers are essential for combining monads. Monads don't compose directly. It seems that there needs to be an extra bit of information provided by monad transformers that tells how each monad can be composed with other monads (but it could be this information is already somehow present, see Is there a monad that doesn't have a corresponding monad transformer?).
On the other hand, applicatives compose directly, see Data.Functor.Compose. This is why we don't need applicative transformers for composition. They're also closed under product (but not coproduct).
For example, having infinite streams data Stream a = Cons a (Stream a) and another applicative g, both Stream (g a) and g (Stream a) are applicatives.
But even though Stream is also a monad (join takes the diagonal of a 2-dimensional stream), its composition with another monad m won't be, neither Stream (m a) nor m (Stream a) will always be a monad.
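The diagonal join mentioned above can be sketched like this (takeS is just a helper for observing a finite prefix of an infinite stream):

```haskell
data Stream a = Cons a (Stream a)

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Cons a as) = Cons (f a) (mapS f as)

headS :: Stream a -> a
headS (Cons a _) = a

tailS :: Stream a -> Stream a
tailS (Cons _ as) = as

-- join for Stream: take the diagonal of a stream of streams
diag :: Stream (Stream a) -> Stream a
diag (Cons row rest) = Cons (headS row) (diag (mapS tailS rest))

takeS :: Int -> Stream a -> [a]
takeS n (Cons a as)
  | n <= 0    = []
  | otherwise = a : takeS (n - 1) as

-- row n counts up from 10 * n, so the diagonal is 0, 11, 22, ...
diagExample :: Stream Integer
diagExample = diag (mapS (\n -> from (10 * n)) (from 0))
  where from k = Cons k (from (k + 1))
```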
Furthermore as we can see, they're both different from your MStream g (which is very close to ListT done right), therefore:
Ad 2.: Can all applicatives be constructed from some given set of primitives? Apparently not. One problem is constructing sum data types: if f and g are applicatives, Either (f a) (g a) won't be, as we don't know how to define Right h <*> Left x.
Another construction primitive is taking a fixed point, as in your MStream example. Here we might attempt to generalize the construction by defining something like
newtype Fix1 f a = Fix1 { unFix1 :: f (Fix1 f) a }
instance (Functor (f (Fix1 f))) => Functor (Fix1 f) where
fmap f (Fix1 a) = Fix1 (fmap f a)
instance (Applicative (f (Fix1 f))) => Applicative (Fix1 f) where
pure k = Fix1 (pure k)
(Fix1 f) <*> (Fix1 x) = Fix1 (f <*> x)
(which requires not-so-nice UndecidableInstances) and then
data MStream' f g a = MStream' (f (a, g a))
type MStream f = Fix1 (MStream' f)
Exercise 5 of the Haskell Typeclassopedia Section 3.2 asks for a proof or counterexample on the statement
The composition of two Functors is also a Functor.
I thought at first that this was talking about composing the fmap methods defined by two separate instances of a Functor, but that doesn't really make sense, since the types wouldn't match up as far as I can tell. For two types f and f', the types of fmap would be fmap :: (a -> b) -> f a -> f b and fmap :: (a -> b) -> f' a -> f' b, and that doesn't really seem composable. So what does it mean to compose two Functors?
A Functor gives two mappings: one on the type level mapping types to types (this is the x in instance Functor x where), and one on the computation level mapping functions to functions (this is the x in fmap = x). You are thinking about composing the computation-level mapping, but should be thinking about composing the type-level mapping; e.g., given
newtype Compose f g x = Compose (f (g x))
can you write
instance (Functor f, Functor g) => Functor (Compose f g)
? If not, why not?
What this is talking about is the composition of type constructors like [] and Maybe, not the composition of functions like fmap. So for example, there are two ways of composing [] and Maybe:
newtype ListOfMaybe a = ListOfMaybe [Maybe a]
newtype MaybeOfList a = MaybeOfList (Maybe [a])
The statement that the composition of two Functors is a Functor means that there is a formulaic way of writing a Functor instance for these types:
instance Functor ListOfMaybe where
fmap f (ListOfMaybe x) = ListOfMaybe (fmap (fmap f) x)
instance Functor MaybeOfList where
fmap f (MaybeOfList x) = MaybeOfList (fmap (fmap f) x)
In fact, the Haskell Platform comes with the module Data.Functor.Compose that gives you a Compose type that does this "for free":
import Data.Functor.Compose
newtype Compose f g a = Compose { getCompose :: f (g a) }
instance (Functor f, Functor g) => Functor (Compose f g) where
fmap f (Compose x) = Compose (fmap (fmap f) x)
Compose is particularly useful with the GeneralizedNewtypeDeriving extension:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
-- Now we can derive Functor and Applicative instances based on those of Compose
newtype ListOfMaybe a = ListOfMaybe (Compose [] Maybe a)
  deriving (Functor, Applicative)
Note that the composition of two Applicatives is also an Applicative. Therefore, since [] and Maybe are Applicatives, so is Compose [] Maybe and ListOfMaybe. Composing Applicatives is a really neat technique that's slowly becoming more common these days, as an alternative to monad transformers for cases when you don't need the full power of monads.
The composition of two functions is when you put one function inside another function, such as
round (sqrt 23)
This is the composition of the two functions round and sqrt. Similarly, the composition of two functors is when you put one functor inside another functor, such as
Just [3, 5, 6, 2]
List is a functor, and so is Maybe. You can get some intuition for why their composition also is a functor if you try to figure out what fmap should do to the above value. Of course it should map over the contents of the inner functor!
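Indeed, composing the two fmaps does exactly that:

```haskell
-- mapping over Maybe-of-list composes the two fmaps:
-- the outer fmap peels off the Just, the inner one maps over the list
doubledAll :: Maybe [Int]
doubledAll = fmap (fmap (* 2)) (Just [3, 5, 6, 2])
```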
It really helps to think about the categorical interpretation here: a functor F : C -> D takes the objects (values) and morphisms (functions) of a category C to objects and morphisms of a category D.
For a second functor G : D -> E, the composition of functors G . F : C -> E is obtained by taking the codomain of F's fmap to be the domain of G's fmap. In Haskell this is accomplished with a little newtype unwrapping.
import Data.Functor
newtype Comp f g a = Comp { unComp :: f (g a) }
compose :: f (g a) -> Comp f g a
compose = Comp
decompose :: Comp f g a -> f (g a)
decompose = unComp
instance (Functor f, Functor g) => Functor (Comp f g) where
fmap foo = compose . fmap (fmap foo) . decompose
Control.Monad.Free implements a free monad as:
data Free f a = Pure a | Free (f (Free f a))
instance Functor f => Functor (Free f) where
fmap f = go where
go (Pure a) = Pure (f a)
go (Free fa) = Free (go <$> fa)
I am having a lot of trouble understanding the second go line, especially in the context of descriptions of what a free monad is. Can someone please describe how this works and why it makes Free f a a free monad?
At this point, you're just making Free a functor rather than a monad. Of course, to be a monad, it has to be a functor as well!
I think it would be a little easier to think about if we rename the Free constructor to avoid confusion:
data Free f a = Pure a | Wrap (f (Free f a))
Now let's look at the structure of what we're building up. For the Pure case, we just have a value of type a. For the Wrap case, we have another Free f a value wrapped in the f functor.
Let's ignore the constructors for a second. That is, if we have Wrap (f (Pure a)) let's think of it as f a. This means that the structure we're building up is just f--a functor--applied repeatedly some number of times. Values of this type will look something like: f (f (f (f (f a)))). To make it more concrete, let f be [] to get: [[[[[a]]]]]. We can have as many levels of this as we want by using the Wrap constructor repeatedly; everything ends when we use Pure.
Putting the constructors back in, [[a]] would look like: Wrap [Wrap [Pure a]].
So all we're doing is taking the Pure value and repeatedly applying a functor to it.
Given this structure of a repeatedly applied functor, how would we map a function over it? For the Pure case--before we've wrapped it in f--this is pretty trivial: we just apply the function. But if we've already wrapped our value in f at least once, we have to map over the outer level and then recursively map over all the inner layers. Put another way, we have to map mapping over the Free monad over the functor f.
This is exactly what the second case of go is doing. go itself is just fmap for Free f a. <$> is fmap for f. So what we do is fmap go over f, which makes the whole thing recursive.
Since this mapping function is recursive, it can deal with an arbitrary number of levels. So we can map a function over [[a]] or [[[[a]]]] or whatever. This is why we need to fmap go when go is fmap itself--the important difference being that the first fmap works for a single layer of f and go recursively works for the whole Free f a construction.
I hope this cleared things up a bit.
To tell you the truth, I usually just find it easier not to read the code in these simpler functions, but rather to read the types and then write the function myself. Think of it as a puzzle. You're trying to construct this:
mapFree :: Functor f => (a -> b) -> Free f a -> Free f b
So how do we do it? Well, let's take the Pure constructor first:
mapFree f (Pure a) = ...
-- I like to write comments like these while using Haskell, then usually delete
-- them by the end:
--
-- f :: a -> b
-- a :: a
With the two type comments in there, and knowing the type of Pure, you should see the solution right away:
mapFree f (Pure a) = Pure (f a)
Now the second case:
mapFree f (Free fa) = ...
-- f :: a -> b
-- fa :: Functor f => f (Free f a)
Well, since f is a Functor, we can use fmap to apply mapFree f to the inner component of f (Free f a). So we get:
mapFree f (Free fa) = Free (fmap (mapFree f) fa)
Now, using this definition as the Functor f => Functor (Free f) instance, we get:
instance Functor f => Functor (Free f) where
fmap f (Pure a) = Pure (f a)
fmap f (Free fa) = Free (fmap (fmap f) fa)
With a bit of work, you can verify that the definition we just arrived at here is the same thing as the one you're puzzling over. (As others have mentioned, (<$>) (defined in Control.Applicative) is just a synonym for fmap.) You may still not understand it, but you managed to write it, which for types as abstract as these is very often good enough.
As for understanding it, though, the thing that helps me is the following: think of a Free monad as a sort of list-like structure, with Pure as [] and Free as (:). From the definition of the type you should see this: Pure is the base case, and Free is the recursive case. What the fmap instance is doing is "pushing" the mapped function to the bottom of this structure, to where the Pure lives.
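To make the list analogy concrete, here is fmap pushing a function down to the Pure leaves of a small Free [] value (same definitions as above, plus a helper to collect the leaves):

```haskell
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a)  = Pure (f a)
  fmap f (Free fa) = Free (fmap (fmap f) fa)

-- collect the Pure leaves, left to right
leaves :: Free [] a -> [a]
leaves (Pure a)  = [a]
leaves (Free fa) = concatMap leaves fa

-- a small two-level structure
example :: Free [] Int
example = Free [Pure 1, Free [Pure 2, Pure 3]]
```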
Since I am confused myself, I answer with a question...could this be a correct substitution (relying on Tikhon's Wrap clarification)?
...
fmap g = go where
go (Pure a) = Pure (g a)
go (Wrap fa) = Wrap (go <$> fa)
Substituting "fmap g" for "go", and "fmap" for "<$>" (since "<$>" is infix,
we flip "go" and "<$>"):
fmap g (Pure a) = Pure (g a)
fmap g (Wrap fa) = Wrap (fmap (fmap g) fa)
Substituting "f (Free f a)" for "fa" in the last line (from the first data
declaration):
fmap g (Wrap fa) = Wrap ( fmap (fmap g) (f (Free f a)) )
= Wrap ( f ( fmap g (Free f a) ) )
= Wrap ( f (Pure (g a) ) ) --if Free f a is Pure
or
= Wrap ( f ( fmap g (Wrap fa') ) ) --if Free f a is Wrap
The last line includes the recursion "fmap g (Wrap fa')", which would continue
unless Pure is encountered.
I've been trying to "learn me a Haskell" through the online book LYAH.
The author describes the behaviour of Applicative functors as having, in a sense, the ability to extract a function from one functor and map it over a second functor; this works through the <*> function declared for the Applicative type class:
class (Functor f) => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
As a simple example, the Maybe type is an instance of Applicative under the following implementation:
instance Applicative Maybe where
pure = Just
Nothing <*> _ = Nothing
(Just f) <*> something = fmap f something
An example of the behaviour mentioned previously:
ghci> Just (*2) <*> Just 10 -- evaluates to Just 20
so the <*> operator "extracts" the (*2) function from the first operand and maps it over the second operand.
Now in Applicative types, both operands of <*> are of the same type, so I thought as an exercise why not try implementing a generalisation of this behaviour, where the two operands are Functors of different types, so I could evaluate something like this:
Just (2*) <*:*> [1,2,3,4] -- should evaluate to [2,4,6,8]
So this is what I came up with:
import Control.Applicative
class (Applicative f, Functor g) => DApplicative f g where
pure1 :: a -> f a
pure1 = pure
(<*:*>) :: f ( a -> b ) -> g a -> g b -- referred below as (1)
instance DApplicative Maybe [] where -- an "instance pair" of this class
(Just func) <*:*> g = fmap func g
main = print x
  where x = Just (2*) <*:*> [1,2,3,4] -- it works, x equals [2,4,6,8]
Now, although the above works, I'm wondering if we can do better; is it possible to give a default implementation for <*:*> that can be applied to a variety of f & g pairs, in the declaration for DApplicative f g itself? And this leads me to the following question: Is there a way to pattern match on constructors across different data types?
I hope my questions make some sense and I'm not just spewing nonsense (if I am, please don't be too harsh; I'm just an FP beginner up way past his bedtime...)
This does make sense, but it ends up being not particularly useful in its current form. The problem is exactly what you've noticed: there is no way to provide a default which does sensible things with different types, or to generally convert from f to g. So you'd have a quadratic explosion in the number of instances you'd need to write.
You didn't finish the DApplicative instance. Here's a full implementation for Maybe and []:
instance DApplicative Maybe [] where -- an "instance pair" of this class
(Just func) <*:*> g = fmap func g
Nothing <*:*> g = []
This combines the behaviors of both Maybe and [], because with Just it does what you expect, but with Nothing it returns nothing, an empty list.
So instead of writing DApplicative which takes two different types, what if you had a way of combining two applicatives f and g into a single type? If you generalize this action, you could then use a standard Applicative with the new type.
This could be done with the standard formulation of Applicatives as
liftAp :: f (g (a -> b)) -> f (g a) -> f (g b)
liftAp l r = (<*>) <$> l <*> r
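Trying liftAp out with f = Maybe and g = []:

```haskell
-- composing two applicatives generically, without a new type
liftAp :: (Applicative f, Applicative g) => f (g (a -> b)) -> f (g a) -> f (g b)
liftAp l r = (<*>) <$> l <*> r

-- f = Maybe, g = []: apply every function to every argument, inside Just
combined :: Maybe [Int]
combined = liftAp (Just [(+ 1), (+ 2)]) (Just [10, 20])
```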
but instead let's change Maybe:
import Control.Applicative
newtype MaybeT f a = MaybeT { runMaybeT :: f (Maybe a) }
instance (Functor f) => Functor (MaybeT f) where
fmap f (MaybeT m) = MaybeT ((fmap . fmap) f m)
instance (Applicative f) => Applicative (MaybeT f) where
pure a = MaybeT (pure (pure a))
(MaybeT f) <*> (MaybeT m) = MaybeT ( (<*>) <$> f <*> m)
Now you just need a way to convert something in the inner applicative, f, into the combined applicative MaybeT f:
lift :: (Functor f) => f a -> MaybeT f a
lift = MaybeT . fmap Just
This looks like a lot of boilerplate, but ghc can automatically derive nearly all of it.
Now you can easily use the combined functions:
*Main Control.Applicative> runMaybeT $ pure (*2) <*> lift [1,2,3,4]
[Just 2,Just 4,Just 6,Just 8]
*Main Control.Applicative> runMaybeT $ MaybeT (pure Nothing) <*> lift [1,2,3,4]
[Nothing,Nothing,Nothing,Nothing]
This behavior at Nothing may be surprising, but if you think of a list as representing indeterminism you can probably see how it could be useful. If you wanted the dual behavior of returning either Just [a] or Nothing, you would just need the transformer stacked the other way around: ListT Maybe a.
This isn't quite the same as the instance I wrote for DApplicative. The reason is because of the types. DApplicative converts an f into a g. That's only possible when you know the specific f and g. To generalize it, the result needs to combine the behaviors of both f and g as this implementation does.
All of this works with Monads too. Transformed types such as MaybeT are provided by monad transformer libraries such as mtl.
Your DApplicative instance for Maybe is not complete: What should happen if the first argument of <*:*> is Nothing?
The choice of what to do for every combination isn't clear. For two lists, <*> generates all combinations: [(+2),(+10)] <*> [3,4] gives [5,6,13,14]. For two ZipLists you get zip-like behaviour: ZipList [(+2),(+10)] <*> ZipList [3,4] gives [5,14]. So you would have to choose one of the two possible behaviours for a DApplicative of a list and a ZipList; there is no "correct" version.
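Both behaviours side by side:

```haskell
import Control.Applicative (ZipList(..))

-- plain lists: every function applied to every argument
allCombos :: [Int]
allCombos = [(+ 2), (+ 10)] <*> [3, 4]

-- ZipList: functions and arguments paired up positionally
zipped :: [Int]
zipped = getZipList (ZipList [(+ 2), (+ 10)] <*> ZipList [3, 4])
```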