Exercise 5 of the Haskell Typeclassopedia Section 3.2 asks for a proof or counterexample on the statement
The composition of two Functors is also a Functor.
I thought at first that this was talking about composing the fmap methods defined by two separate instances of a Functor, but that doesn't really make sense, since the types wouldn't match up as far as I can tell. For two types f and f', the types of fmap would be fmap :: (a -> b) -> f a -> f b and fmap :: (a -> b) -> f' a -> f' b, and that doesn't really seem composable. So what does it mean to compose two Functors?
A Functor gives two mappings: one on the type level mapping types to types (this is the x in instance Functor x where), and one on the computation level mapping functions to functions (this is the x in fmap = x). You are thinking about composing the computation-level mapping, but should be thinking about composing the type-level mapping; e.g., given
newtype Compose f g x = Compose (f (g x))
can you write
instance (Functor f, Functor g) => Functor (Compose f g)
? If not, why not?
What this is talking about is the composition of type constructors like [] and Maybe, not the composition of functions like fmap. So for example, there are two ways of composing [] and Maybe:
newtype ListOfMaybe a = ListOfMaybe [Maybe a]
newtype MaybeOfList a = MaybeOfList (Maybe [a])
The statement that the composition of two Functors is a Functor means that there is a formulaic way of writing a Functor instance for these types:
instance Functor ListOfMaybe where
  fmap f (ListOfMaybe x) = ListOfMaybe (fmap (fmap f) x)

instance Functor MaybeOfList where
  fmap f (MaybeOfList x) = MaybeOfList (fmap (fmap f) x)
In fact, the Haskell Platform comes with the module Data.Functor.Compose that gives you a Compose type that does this "for free":
import Data.Functor.Compose
newtype Compose f g a = Compose { getCompose :: f (g a) }
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (fmap (fmap f) x)
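For example, a quick check using the library type (`bumped` is just a name for this illustration):

```haskell
import Data.Functor.Compose (Compose (..))

-- One fmap reaches through both layers at once:
bumped :: Compose [] Maybe Int
bumped = fmap (+ 1) (Compose [Just 1, Nothing, Just 3])
-- getCompose bumped == [Just 2, Nothing, Just 4]
```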
Compose is particularly useful with the GeneralizedNewtypeDeriving extension:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype ListOfMaybe a = ListOfMaybe (Compose [] Maybe a)
  -- Now we can derive Functor and Applicative instances based on those of Compose
  deriving (Functor, Applicative)
Note that the composition of two Applicatives is also an Applicative. Therefore, since [] and Maybe are Applicatives, so is Compose [] Maybe and ListOfMaybe. Composing Applicatives is a really neat technique that's slowly becoming more common these days, as an alternative to monad transformers for cases when you don't need the full power of monads.
The composition of two functions is when you put one function inside another function, such as
round (sqrt 23)
This is the composition of the two functions round and sqrt. Similarly, the composition of two functors is when you put one functor inside another functor, such as
Just [3, 5, 6, 2]
List is a functor, and so is Maybe. You can get some intuition for why their composition also is a functor if you try to figure out what fmap should do to the above value. Of course it should map over the contents of the inner functor!
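Concretely, nesting fmap is exactly what mapping over the composed value amounts to:

```haskell
-- fmap for Maybe gets us past the outer layer;
-- fmap for [] then maps over the inner list:
doubled :: Maybe [Int]
doubled = fmap (fmap (* 2)) (Just [3, 5, 6, 2])
-- doubled == Just [6, 10, 12, 4]
```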
It really helps to think about the categorical interpretation here: a functor F : C -> D takes objects (values) and morphisms (functions) of a category C to objects and morphisms of a category D.
For a second functor G : D -> E, the composite functor G . F : C -> E simply takes the codomain of F's arrow mapping (its fmap) as the domain of G's. In Haskell this is accomplished with a little newtype wrapping and unwrapping.
import Data.Functor
newtype Comp f g a = Comp { unComp :: f (g a) }
compose :: f (g a) -> Comp f g a
compose = Comp
decompose :: Comp f g a -> f (g a)
decompose = unComp
instance (Functor f, Functor g) => Functor (Comp f g) where
  fmap foo = compose . fmap (fmap foo) . decompose
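A quick sanity check (repeating the definitions so the snippet compiles standalone):

```haskell
newtype Comp f g a = Comp { unComp :: f (g a) }

instance (Functor f, Functor g) => Functor (Comp f g) where
  fmap foo = Comp . fmap (fmap foo) . unComp

-- Both layers' fmaps fire, one per functor:
bumped :: [Maybe Int]
bumped = unComp (fmap (+ 1) (Comp [Just 1, Nothing, Just 2]))
-- bumped == [Just 2, Nothing, Just 3]
```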
Related
While working through the Composing Types chapter of the Haskell Book, I was given the task of writing Functor and Applicative instances for the following type.
newtype Compose f g a = Compose { getCompose :: f (g a) }
I wrote the following definitions
Functor:
fmap f (Compose fga) = Compose $ (fmap . fmap) f fga
Applicative:
(Compose f) <*> (Compose a) = Compose $ (<*>) <$> f <*> a
I learned that composing two Functors or Applicatives gives Functor and Applicative respectively.
The author also explained it is not possible to compose two Monads the same way. So we use Monad Transformers. I just do not want to read Monad Transformers unless I'm clear with why Monads do not compose.
So far I tried to write bind function like this:
Monad:
(>>=) :: Compose f g a -> (a -> Compose f g b) -> Compose f g b
(Compose fga) >>= h = (fmap.fmap) h fga
and of course got this error from GHC
Expected type: Compose f g b
Actual type: f (g (Compose f g b))
If I could strip the outermost f g somehow, the composition would give us a monad, right? (I still couldn't figure out how to strip it, though.)
I tried reading answers from other Stack Overflow questions like this one, but the answers are all fairly theoretical or mathematical. I still haven't learned why monads do not compose. Can somebody explain it to me without using math?
I think this is easiest to understand by looking at the join operator:
join :: Monad m => m (m a) -> m a
join is an alternative to >>= for defining a Monad, and is a little easier to reason about. (But now you have an exercise to do: show how to implement >>= from join, and how to implement join from >>=!)
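(If you want to check your answers to that exercise afterwards, the standard interdefinitions look like this; the names are mine, to avoid clashing with the real class methods:)

```haskell
import Control.Monad (join)

-- join from (>>=): collapse the layers by binding with the identity function.
joinFromBind :: Monad m => m (m a) -> m a
joinFromBind mma = mma >>= id

-- (>>=) from join (and fmap): map first, then collapse the extra layer.
bindFromJoin :: Monad m => m a -> (a -> m b) -> m b
bindFromJoin m k = join (fmap k m)
```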
Let's try to make a join operation for Compose f g and see what goes wrong. Our input is essentially a value of type f (g (f (g a))), and we want to produce a value of type f (g a). We also know that we have join for f and g individually, so if we could get a value of type f (f (g (g a))), then we could hit it with fmap join . join to get the f (g a) we wanted.
Now, f (f (g (g a))) isn't so far from f (g (f (g a))). All we really need is a function like this: distribute :: g (f a) -> f (g a). Then we could implement join like this:
join = Compose . fmap join . join . fmap (distribute . fmap getCompose) . getCompose
Note: there are some laws that we would want distribute to satisfy, in order to make sure that the join we get here is lawful.
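For pairs of monads that do compose, distribute is often just sequenceA. For example, Maybe distributes over any Applicative because Maybe is Traversable; this is essentially the ingredient behind the MaybeT transformer. (distributeMaybe is my name for the specialization:)

```haskell
-- g = Maybe distributes over any f, since Maybe is Traversable:
distributeMaybe :: Applicative f => Maybe (f a) -> f (Maybe a)
distributeMaybe = sequenceA
-- distributeMaybe (Just [1, 2]) == [Just 1, Just 2]
-- distributeMaybe Nothing == [Nothing]   -- with f = [], this is pure Nothing
```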
Ok, so that shows how we can compose two monads if we have a distributive law distribute :: (Monad f, Monad g) => g (f a) -> f (g a). Now, it could be true that every pair of monads has a distributive law. Maybe we just have to think really hard about how to write one down?
Unfortunately there are pairs of monads that don't have a distributive law. So we can answer your original question by producing two monads that definitely don't have a way of turning a g (f a) into an f (g a). These two monads will witness to the fact that monads don't compose in general.
I claim that g = IO and f = Maybe do not have a distributive law:
-- Impossible!
distribute :: IO (Maybe a) -> Maybe (IO a)
Let's think about why such a thing should be impossible. The input to this function is an IO action that goes out into the real world and eventually produces Nothing or a Just x. The output of this function is either Nothing, or Just an IO action that, when run, eventually produces x. To produce the Maybe (IO a), we would have to peek into the future and predict what the IO (Maybe a) action is going to do!
In summary:
Monads can compose if there is a distributive law g (f a) -> f (g a). (but see the addendum below)
There are some monads that don't have such a distributive law.
Some monads can compose with each other, but not every pair of monads can compose.
Addendum: "if", but what about "only if"? If all three of F, G, and FG are monads, then you can construct a natural transformation δ : ∀X. GFX -> FGX as the composition of GFη_X : GFX -> GFGX followed by η_{GFGX} : GFGX -> FGFGX and then by μ_X : FGFGX -> FGX. In Haskellese (with explicit type applications for clarity), that would be
delta :: forall f g x. (Monad f, Monad g, Monad (Compose f g))
      => g (f x) -> f (g x)
delta = join' . pure @f . fmap @g (fmap @f (pure @g))
  where
    -- join for (f . g), via the `Monad (Compose f g)` instance
    join' :: f (g (f (g x))) -> f (g x)
    join' = getCompose . join @(Compose f g) . fmap Compose . Compose
So if the composition FG is a monad, then you can get a natural transformation with the right shape to be a distributive law. However, there are some extra constraints that fall out of making sure your distributive law satisfies the correct properties, vaguely alluded to above. As always, the n-Category Cafe has the gory details.
I'm learning Haskell and trying to do exercises from the book Haskell Programming from first principles, and I'm stuck trying to write an Applicative for the Pair type
data Pair a = Pair a a deriving Show
I have seen some other examples on the web, but I'm trying a somewhat different applicative functor: I'm trying to utilize the monoidal structure of this type. Here is what I have
data Pair a = Pair a a deriving (Show, Eq)
instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Semigroup a => Semigroup (Pair a) where
  (Pair x y) <> (Pair x' y') = Pair (x <> x') (y <> y')

instance Applicative Pair where
  pure x = Pair x x
  (Pair f g) <*> p = fmap f p <> fmap g p
Unfortunately this will not compile:
* No instance for (Semigroup b) arising from a use of `<>'
Possible fix:
add (Semigroup b) to the context of
the type signature for:
(<*>) :: forall a b. Pair (a -> b) -> Pair a -> Pair b
* In the expression: fmap f p <> fmap g p
In an equation for `<*>': (Pair f g) <*> p = fmap f p <> fmap g p
In the instance declaration for `Applicative Pair'
And this is where I'm stuck; I don't see how I can add a typeclass constraint to the Applicative definition, and I thought that making the type Pair an instance of Semigroup would be enough.
Other solutions that I have seen are like
Pair f g <*> Pair x y = Pair (f x) (g y)
but these solutions don't utilize the monoidal part of the Pair type.
Is it even possible to make this applicative the way I'm trying?
Although it's true that Applicative is the class representing monoidal functors (specifically, Hask endofunctors which are monoidal), Allen & Moronuki unfortunately present this in a way that seems to suggest a direct relation between the Monoid and Applicative classes. There is, in general, no such relation! (The Writer type does define one particular Applicative instance based on the Monoid class, but that's an extremely special case.)
This spawned a rather extended discussion at another SO question.
What the “monoidal” in “monoidal functor” refers to is a monoidal structure on the category's objects, i.e. on Haskell types. Namely, you can combine any two types to a tuple-type. This has per se nothing whatsoever to do with the Monoid class, which is about combining two values of a single type to a value of the same type.
Pair does allow an Applicative instance, but you can't base it on the Semigroup instance, although the definition actually looks quite similar:
instance Applicative Pair where
  pure x = Pair x x
  Pair f g <*> Pair p q = Pair (f p) (g q)
However, you can now define the Semigroup instance in terms of this:
instance Semigroup a => Semigroup (Pair a) where
  (<>) = liftA2 (<>)
That, indeed, is a valid Semigroup instance for any applicative, but it's usually not the definition you want (often, containers have a natural combination operation that never touches the contained elements, e.g. list concatenation).
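Putting the two instances together in one standalone snippet:

```haskell
import Control.Applicative (liftA2)

data Pair a = Pair a a deriving (Show, Eq)

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

-- The zip-like Applicative, not based on Semigroup:
instance Applicative Pair where
  pure x = Pair x x
  Pair f g <*> Pair p q = Pair (f p) (g q)

-- The Semigroup instance now comes for free from the Applicative:
instance Semigroup a => Semigroup (Pair a) where
  (<>) = liftA2 (<>)
-- Pair "a" "b" <> Pair "c" "d" == Pair "ac" "bd"
```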
I don't think that Pair is an Applicative the way you want it to be: Applicative states that
(<*>) :: f (a -> b) -> f a -> f b
should work for all functions in the first position, whereas you want
(<*>) :: Semigroup b => f (a -> b) -> f a -> f b.
If Pair a were always a Semigroup (the way lists are, for example), your reasoning would be sound, but you need the prerequisite that the element type of Pair be a Semigroup.
Correct: Pair can't be made an Applicative in the way you want, because Applicative f demands that f a "feel applicative-y" for any a, even non-Semigroup as. Consider writing an alternative class and implementing it:
class CApplicative f where
  type C f
  pure :: C f a => a -> f a
  app :: C f b => f (a -> b) -> f a -> f b

instance CApplicative Pair where
  type C Pair = Semigroup
  pure x = Pair x x
  app (Pair f g) p = fmap f p <> fmap g p
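Filled out so it compiles (this needs ConstraintKinds and TypeFamilies; the methods are renamed cpure/capp here to avoid clashing with the Prelude):

```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE TypeFamilies #-}
import Data.Kind (Constraint, Type)

class CApplicative f where
  type C f :: Type -> Constraint
  cpure :: C f a => a -> f a
  capp  :: C f b => f (a -> b) -> f a -> f b

data Pair a = Pair a a deriving (Show, Eq)

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Semigroup a => Semigroup (Pair a) where
  Pair x y <> Pair x' y' = Pair (x <> x') (y <> y')

-- The constrained Applicative: only works at Semigroup element types.
instance CApplicative Pair where
  type C Pair = Semigroup
  cpure x = Pair x x
  capp (Pair f g) p = fmap f p <> fmap g p
-- capp (Pair ("x" <>) ("y" <>)) (Pair "a" "b") == Pair "xaya" "xbyb"
```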
This can be derived since base 4.17.0.0, which includes the via types Generically and Generically1. This will derive the same instances as leftaroundabout writes:
{-# Language DeriveGeneric #-}
{-# Language DerivingStrategies #-}
{-# Language DerivingVia #-}
import Data.Monoid
import GHC.Generics
data Pair a = Pair a a
  deriving stock (Generic, Generic1)
  deriving (Semigroup, Monoid)
    via Generically (Pair a)
  deriving (Functor, Applicative)
    via Generically1 Pair
Alternatively you can lift Semigroup and Monoid through the Applicative
  deriving (Semigroup, Monoid, Num)
    via Ap Pair a
Consider the following signature of foldMap
foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
This is very similar to "bind", just with the arguments swapped:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
It seems to me that there must therefore be some sort of relationship between Foldable, Monoid and Monad, but I can't find it in the superclasses. Presumably I can transform one or two of these into the other, but I'm not sure how.
Could that relationship be detailed?
Monoid and Monad
Wow, this is actually one of the rare times we can use the quote:
A monad is just a monoid in the category of endofunctors, [...]
Let's start with a monoid. A monoid in the category Set of sets is a set of elements m with an empty element mempty and an associative function mappend to combine elements such that
mempty `mappend` x == x -- for any x
x `mappend` mempty == x -- for any x
-- and
a `mappend` (b `mappend` c) == (a `mappend` b) `mappend` c -- for all a, b, c
Note that a monoid is not limited to sets; monoids exist in other categories too (a monad is a monoid in a category of endofunctors, for instance). Basically, any time you have an associative binary operation and an identity for it, you have a monoid.
Now a monad, which is a "monoid in the category of endofunctors" has following properties:
It's an endofunctor, which means it has kind * -> * in the category Hask of Haskell types.
Now, to go further you must know a little bit of category theory, which I will try to explain here: given two functors F and G, a natural transformation from F to G is given by a function α that maps every F a to a G a, compatibly with fmap. Roughly speaking, a natural transformation is a function between functors.
Now, in category theory there can be many functors between two categories. In a simplified view it can be said that we don't even care which functors map from where to where; we only care about the natural transformations between them.
Coming back to monads, we can now see that a "monoid in the category of endofunctors" must possess two natural transformations. Let's call our monad endofunctor M:
A natural transformation from the identity (endo)functor to the monad:
η :: 1 -> M -- this is return
And a natural transformation from the composition of the monad with itself to the monad:
μ :: M × M -> M
Since × is the composition of functors, we can (roughly speaking) also write:
μ :: (m × m) a -> m a
μ :: m (m a) -> m a -- join in Haskell
Satisfying these laws:
μ . M μ == μ . μ M
μ . M η == id == μ . η M
So, a monad is a special case of a monoid in the category of endofunctors. You can't write a monoid instance for a monad in normal Haskell, since Haskell's notion of composition is too weak (I think this is because Haskell functions are restricted to Hask, which is weaker than Cat). See this for more information.
What about Foldable?
Now as for Foldable: there exist definitions of folds using a custom binary function to combine the elements. You could of course supply any function to combine elements, or you could use an existing concept of combining elements: the monoid. Again, please note that this monoid is the set-level monoid (the Monoid class), not the categorical definition of a monoid.
Since the monoid's mappend is associative, foldl and foldr yield the same result, which is why the folding of monoids can be reduced to fold :: Monoid m, Foldable t => t m -> m. This is an obvious connection between monoid and foldable.
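For example, with the Sum wrapper from Data.Monoid:

```haskell
import Data.Foldable (fold)
import Data.Monoid (Sum (..))

-- fold needs no combining function; the Monoid instance supplies it:
total :: Sum Int
total = fold [Sum 1, Sum 2, Sum 3]
-- getSum total == 6
```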
@danidiaz already pointed out the connection between Applicative, Monoid and Foldable using the Const functor Const a b = Const a, whose Applicative instance requires the first parameter of Const to be a Monoid (there is no pure without mempty, disregarding undefined).
Comparing monad and foldable is a bit of a stretch in my opinion, since monad is more powerful than foldable in the sense that foldable can only accumulate a list's values according to a mapping function, but the monad bind can structurally alter the context (a -> m b).
Summary: (>>=) and traverse look similar because they both are arrow mappings of functors, while foldMap is (almost) a specialised traverse.
Before we begin, there is one bit of terminology to explain. Consider fmap:
fmap :: Functor f => (a -> b) -> (f a -> f b)
A Haskell Functor is a functor from the Hask category (the category with Haskell functions as arrows) to itself. In category theory terms, we say that the (specialised) fmap is the arrow mapping of this functor, as it is the part of the functor that takes arrows to arrows. (For the sake of completeness: a functor consists of an arrow mapping plus an object mapping. In this case, the objects are Haskell types, and so the object mapping takes types to types -- more specifically, the object mapping of a Functor is its type constructor.)
We will also want to keep in mind the category and functor laws:
-- Category laws for Hask:
f . id = f
id . f = f
h . (g . f) = (h . g) . f
-- Functor laws for a Haskell Functor:
fmap id = id
fmap (g . f) = fmap g . fmap f
In what follows, we will work with categories other than Hask, and functors which are not Functors. In such cases, we will replace id and (.) by the appropriate identity and composition, fmap by the appropriate arrow mapping and, in one case, = by an appropriate equality of arrows.
(=<<)
To begin with the more familiar part of the answer, for a given monad m the a -> m b functions (also known as Kleisli arrows) form a category (the Kleisli category of m), with return as identity and (<=<) as composition. The three category laws, in this case, are just the monad laws:
f <=< return = f
return <=< f = f
h <=< (g <=< f) = (h <=< g) <=< f
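To make the Kleisli category concrete, here is composition of two Maybe-producing functions (safeDiv and halve are hypothetical helpers, just for illustration):

```haskell
import Control.Monad ((<=<))

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

halve :: Int -> Maybe Int
halve x = if even x then Just (x `div` 2) else Nothing

-- (<=<) plays the role of (.) in the Kleisli category of Maybe:
halveQuotient :: Int -> Maybe Int
halveQuotient = halve <=< safeDiv 100
-- halveQuotient 5 == Just 10; halveQuotient 0 == Nothing; halveQuotient 3 == Nothing
```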
Now, you asked about flipped bind:
(=<<) :: Monad m => (a -> m b) -> (m a -> m b)
It turns out that (=<<) is the arrow mapping of a functor from the Kleisli category of m to Hask. The functor laws applied to (=<<) amount to two of the monad laws:
return =<< x = x -- right unit
(g <=< f) =<< x = g =<< (f =<< x) -- associativity
traverse
Next, we need a detour through Traversable (a sketch of a proof of the results in this section is provided at the end of the answer). First, we note that the a -> f b functions for all applicative functors f taken at once (as opposed to one at a time, as when specifying a Kleisli category) form a category, with Identity as identity and Compose . fmap g . f as composition. For that to work, we also have to adopt a more relaxed equality of arrows, which ignores the Identity and Compose boilerplate (which is only necessary because I am writing this in pseudo-Haskell, as opposed to proper mathematical notation). More precisely, we will consider any two functions that can be interconverted using some composition of the Identity and Compose isomorphisms to be equal arrows (in other words, we will not distinguish between a and Identity a, nor between f (g a) and Compose f g a).
Let's call that category the "traversable category" (as I cannot think of a better name right now). In concrete Haskell terms, an arrow in this category is a function which adds an extra layer of Applicative context "below" any previous existing layers. Now, consider traverse:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> (t a -> f (t b))
Given a choice of traversable container, traverse is the arrow mapping of a functor from the "traversable category" to itself. The functor laws for it amount to the traversable laws.
In short, both (=<<) and traverse are analogues of fmap for functors involving categories other than Hask, and so it is not surprising that their types are a bit similar to each other.
foldMap
We still have to explain what all of that has to do with foldMap. The answer is that foldMap can be recovered from traverse (cf. danidiaz's answer -- it uses traverse_, but as the applicative functor is Const m the result is essentially the same):
-- cf. Data.Traversable
foldMapDefault :: (Traversable t, Monoid m) => (a -> m) -> (t a -> m)
foldMapDefault f = getConst . traverse (Const . f)
Thanks to the Const/getConst isomorphism, this is clearly equivalent to:
foldMapDefault' :: (Traversable t, Monoid m)
=> (a -> Const m b) -> (t a -> Const m (t b))
foldMapDefault' f = traverse f
Which is just traverse specialised to the Monoid m => Const m applicative functors. Even though Traversable is not Foldable and foldMapDefault is not foldMap, this provides a decent justification for why the type of foldMap resembles that of traverse and, transitively, that of (=<<).
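Running that definition confirms it behaves like foldMap (renamed foldMapViaTraverse here to avoid clashing with Data.Traversable.foldMapDefault):

```haskell
import Control.Applicative (Const (..))
import Data.Monoid (Sum (..))

-- traverse at the Const applicative discards the "shape" and keeps the Monoid:
foldMapViaTraverse :: (Traversable t, Monoid m) => (a -> m) -> t a -> m
foldMapViaTraverse f = getConst . traverse (Const . f)
-- getSum (foldMapViaTraverse Sum [1, 2, 3]) == 6
```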
As a final observation, note that the arrows of the "traversable category" with applicative functor Const m for some Monoid m do not form a subcategory, as there is no identity unless Identity is among the possible choices of applicative functor. That probably means there is nothing else of interest to say about foldMap from the perspective of this answer. The only single choice of applicative functor that gives a subcategory is Identity, which is not at all surprising, given how a traversal with Identity amounts to fmap on the container.
Appendix
Here is a rough sketch of the derivation of the traverse result, yanked from my notes from several months ago with minimal editing. ~ means "equal up to (some relevant) isomorphism".
-- Identity and composition for the "traversable category".
idT = Identity
g .*. f = Compose . fmap g . f
-- Category laws: right identity
f .*. idT ~ f
f .*. idT
Compose . fmap f . idT
Compose . fmap f . Identity
Compose . Identity . f
f -- using getIdentity . getCompose
-- Category laws: left identity
idT .*. f ~ f
idT .*. f
Compose . fmap Identity . f
f -- using fmap getIdentity . getCompose
-- Category laws: associativity
h .*. (g .*. f) ~ (h .*. g) .*. f
h .*. (g .*. f) -- LHS
h .*. (Compose . fmap g . f)
Compose . fmap h . (Compose . fmap g . f)
Compose . Compose . fmap (fmap h) . fmap g . f
(h .*. g) .*. f -- RHS
(Compose . fmap h . g) .*. f
Compose . fmap (Compose . fmap h . g) . f
Compose . fmap (Compose . fmap h) . fmap g . f
Compose . fmap Compose . fmap (fmap h) . fmap g . f
-- using Compose . Compose . fmap getCompose . getCompose
Compose . Compose . fmap (fmap h) . fmap g . f -- RHS ~ LHS
-- Functor laws for traverse: identity
traverse idT ~ idT
traverse Identity ~ Identity -- i.e. the identity law of Traversable
-- Functor laws for traverse: composition
traverse (g .*. f) ~ traverse g .*. traverse f
traverse (Compose . fmap g . f) ~ Compose . fmap (traverse g) . traverse f
-- i.e. the composition law of Traversable
When a container is Foldable, there is a relationship between foldMap and Applicative (which is a superclass of Monad).
Foldable has a function called traverse_, with signature:
traverse_ :: (Foldable t, Applicative f) => (a -> f b) -> t a -> f ()
One possible Applicative is Constant. To be an Applicative, it requires the "accumulator" parameter to be a Monoid:
newtype Constant a b = Constant { getConstant :: a } -- no b value at the term level!
instance Monoid a => Applicative (Constant a)
for example:
ghci> Constant (Sum 1) <*> Constant (Sum 2) :: Constant (Sum Int) whatever
Constant (Sum {getSum = 3})
We can define foldMap in terms of traverse_ and Constant this way:
foldMap' :: (Monoid m, Foldable t) => (a -> m) -> t a -> m
foldMap' f = getConstant . traverse_ (Constant . f)
We use traverse_ to go through the container, accumulating values with Constant, and then we use getConstant to get rid of the newtype.
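As a standalone check (Constant here is Data.Functor.Constant from the transformers package; foldMapC is my name for the definition above):

```haskell
import Data.Foldable (traverse_)
import Data.Functor.Constant (Constant (..))
import Data.Monoid (Sum (..))

-- traverse_ walks the container; Constant's Applicative accumulates the monoid.
foldMapC :: (Monoid m, Foldable t) => (a -> m) -> t a -> m
foldMapC f = getConstant . traverse_ (Constant . f)
-- getSum (foldMapC Sum [1, 2, 3]) == 6
```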
There is a lot of talk about Applicative not needing its own transformer class, like this:
class AppTrans t where
  liftA :: Applicative f => f a -> t f a
But I can define applicative transformers that don't seem to be compositions of applicatives! For example, side-effectful streams:
data MStream f a = MStream (f (a, MStream f a))
Lifting just performs the side effect at every step:
instance AppTrans MStream where
  liftA action = MStream $ (,) <$> action <*> pure (liftA action)
And if f is an applicative, then MStream f is as well:
instance Functor f => Functor (MStream f) where
  fmap fun (MStream stream) = MStream $ (\(a, as) -> (fun a, fmap fun as)) <$> stream

instance Applicative f => Applicative (MStream f) where
  pure = liftA . pure
  MStream fstream <*> MStream astream = MStream
    $ (\(f, fs) (a, as) -> (f a, fs <*> as)) <$> fstream <*> astream
I know that for any practical purposes, f should be a monad:
joinS :: Monad m => MStream m a -> m [a]
joinS (MStream stream) = do
  (a, as) <- stream
  aslist <- joinS as
  return $ a : aslist
But while there is a Monad instance for MStream m, it's inefficient. (Or even incorrect?) The Applicative instance is actually useful!
Now note that usual streams arise as special cases for the identity functor:
import Data.Functor.Identity
type Stream a = MStream Identity a
But the composition of Stream and f is not MStream f! Rather, Compose Stream f a is isomorphic to Stream (f a).
I'd like to know whether MStream is a composition of any two applicatives.
Edit:
I'd like to offer a category theoretic viewpoint. A transformer is a "nice" endofunctor t on the category C of applicative functors (i.e. lax monoidal functors with strength), together with a natural transformation liftA from the identity on C to t. The more general question is now what useful transformers exist that are not of the form "compose with g" (where g is an applicative). My claim is that MStream is one of them.
Great question! I believe there are two different parts of this question:
Composing existing applicatives or monads into more complex ones.
Constructing all applicatives/monads from some given starting set.
Ad 1.: Monad transformers are essential for combining monads. Monads don't compose directly. It seems that there needs to be an extra bit of information provided by monad transformers that tells how each monad can be composed with other monads (but it could be that this information is already somehow present; see Is there a monad that doesn't have a corresponding monad transformer?).
On the other hand, applicatives compose directly, see Data.Functor.Compose. This is why we don't need applicative transformers for composition. Applicatives are also closed under product (but not coproduct).
For example, having infinite streams data Stream a = Cons a (Stream a) and another applicative g, both Stream (g a) and g (Stream a) are applicatives.
But even though Stream is also a monad (join takes the diagonal of a 2-dimensional stream), its composition with another monad m won't be, neither Stream (m a) nor m (Stream a) will always be a monad.
Furthermore as we can see, they're both different from your MStream g (which is very close to ListT done right), therefore:
Ad 2.: Can all applicatives be constructed from some given set of primitives? Apparently not. One problem is constructing sum data types: If f and g are applicatives, Either (f a) (g a) won't be, as we don't know how to compose Right h <*> Left x.
Another construction primitive is taking a fixed point, as in your MStream example. Here we might attempt to generalize the construction by defining something like
newtype Fix1 f a = Fix1 { unFix1 :: f (Fix1 f) a }

instance (Functor (f (Fix1 f))) => Functor (Fix1 f) where
  fmap f (Fix1 a) = Fix1 (fmap f a)

instance (Applicative (f (Fix1 f))) => Applicative (Fix1 f) where
  pure k = Fix1 (pure k)
  (Fix1 f) <*> (Fix1 x) = Fix1 (f <*> x)
(which requires not-so-nice UndecidableInstances) and then
data MStream' f g a = MStream (f (a, g a))
type MStream f = Fix1 (MStream' f)
We can have two type constructors f, g :: * -> * that are not both monads, yet whose composition is. For example, for an arbitrary fixed s:
f a := s -> a
g a := (s, a)
g a isn't a monad (unless we restrict s to a monoid), but f (g a) is the state monad s -> (s, a). (Unlike functors and applicative functors, even if both f and g were monads, their composition might not be.)
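Spelled out as a hand-rolled sketch (St is just a name for the composite s -> (s, a); the instances below are the usual State ones):

```haskell
-- f a = s -> a and g a = (s, a); their composite f (g a) = s -> (s, a)
-- is the State monad, even though g alone is not a monad.
newtype St s a = St { runSt :: s -> (s, a) }

instance Functor (St s) where
  fmap f (St m) = St $ \s -> let (s', a) = m s in (s', f a)

instance Applicative (St s) where
  pure a = St $ \s -> (s, a)
  St mf <*> St ma = St $ \s ->
    let (s', f)  = mf s
        (s'', a) = ma s'
    in (s'', f a)

instance Monad (St s) where
  -- Thread the state through the first computation, then the continuation:
  St m >>= k = St $ \s -> let (s', a) = m s in runSt (k a) s'
-- runSt (St (\s -> (s + 1, s)) >>= \x -> pure (x * 2)) 10 == (11, 20)
```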
Is there a similar example for functors or applicative functors? That is, is there a pair f and g whose composition is a functor (or an applicative functor), even though
one of f and g isn't an (applicative) functor and the other is, or
neither of them is an (applicative) functor?
This is not a (covariant) functor
f x = x -> r
but f . f is the "continuation" functor (also a monad):
f (f x) = (x -> r) -> r
This is probably not the best example because f is a contravariant functor.
Let g :: *->*. Then Const A . g is a functor for any A, in fact isomorphic to Const A.
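Spelling that claim out (ConstComp is a name I'm introducing for the composite; since Const A ignores its second argument, no g b value is ever stored, so g need not be a Functor at all):

```haskell
-- The composition Const a . g: the g b parameter is phantom,
-- so the Functor instance needs nothing from g.
newtype ConstComp a g b = ConstComp { getConstComp :: a } deriving (Show, Eq)

instance Functor (ConstComp a g) where
  fmap _ (ConstComp a) = ConstComp a  -- nothing to map over
-- getConstComp (fmap not (ConstComp 5 :: ConstComp Int Maybe Bool)) == 5
```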