I'm working on a project and trying to write longzip using an anamorphism, but I'm having some trouble writing a coalgebra for this use case. I've defined my anamorphism in terms of Fix below:
-- Fixed point of a Functor
newtype Fix f = In (f (Fix f))
deriving instance (Eq (f (Fix f))) => Eq (Fix f)
out :: Fix f -> f (Fix f)
out (In f) = f
-- Anamorphism
type Coalgebra f a = a -> f a
ana :: (Functor f) => Coalgebra f a -> a -> Fix f
ana f = In . fmap (ana f) . f
This is the definition for ana derived from "reversing the arrows" of cata:
-- Catamorphism
type Algebra f a = f a -> a
cata :: (Functor f) => Algebra f a -> Fix f -> a
cata f = f . fmap (cata f) . out
I've seen zip written using a version of ana that is obviously defined differently (takes a predicate as a parameter):
zip2 = ana unsp fin
  where
    fin (as, bs) = (as == []) || (bs == [])
    unsp (a : as, b : bs) = ((a, b), (as, bs))
(Taken from https://en.wikipedia.org/wiki/Anamorphism)
But I'm unsure how to move forward using the version of ana defined above, particularly in regard to writing a Coalgebra of type a -> f a. For instance, would it have to take the two list parameters to zip and combine them into a single a?
First off, give yourself a starting point. Write down what you're going to do:
-- make this example actually complete
{-# LANGUAGE StandaloneDeriving, UndecidableInstances, DeriveFunctor #-}
import Prelude hiding (zip)
-- Fixed point of a Functor
newtype Fix f = In (f (Fix f))
deriving instance (Eq (f (Fix f))) => Eq (Fix f)
deriving instance (Show (f (Fix f))) => Show (Fix f)
out :: Fix f -> f (Fix f)
out (In f) = f
-- Anamorphism
type Coalgebra f a = a -> f a
ana :: (Functor f) => Coalgebra f a -> a -> Fix f
ana f = In . fmap (ana f) . f
-- Base functor for a list
data ListF a f = Nil | Cons a f deriving (Eq, Show, Functor)
type List a = Fix (ListF a)
-- write down the type. It helps you think about things
zip :: List a -> List b -> List (a, b)
zip x y = ana undefined undefined
You know zip has to be implemented as a call to ana, so what remains is figuring out the seed and the coalgebra. It should be fairly clear the seed needs to contain x and y. It doesn't seem like any further information should be necessary, so let's just assume the seed will be (x, y) until/unless it poses a problem. That's enough information to pin down the types for this first try:
zip :: List a -> List b -> List (a, b)
zip x y = ana zipCoalgebra (x, y)
zipCoalgebra :: (List a, List b) -> ListF (a, b) (List a, List b)
zipCoalgebra = undefined
I feel like this is the step you've missed: writing down what you're trying to do and following the types to pin down what you need. The rest of this is sort of trivial if you've ever seen any implementation of zip. It's a matter of writing down the most boring thing that type-checks (paying close attention to the difference between List and ListF in the type). I strongly recommend you stop reading here and give it a try yourself. There isn't much else I can say that's actually helpful for learning how to think about this.
If you really have no idea at all:
zipCoalgebra :: (List a, List b) -> ListF (a, b) (List a, List b)
zipCoalgebra (In (Cons a as), In (Cons b bs)) = Cons (a, b) (as, bs)
zipCoalgebra _ = Nil
It really is what you'd expect if you've ever seen an implementation of zip before, with the necessary noise to make it type-check around Fix. Let's give it a spin:
ghci> zip (In (Cons 1 (In Nil))) (In Nil)
In Nil
ghci> zip (In (Cons 1 (In Nil))) (In (Cons 2 (In Nil)))
In (Cons (1,2) (In Nil))
Yep. Behaving as expected. It seems the two lists were sufficient as the seed, and all is well.
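As a convenience for experimenting, here's a pair of hypothetical helpers (my names, not part of the answer above) for converting between ordinary lists and the Fix-encoded List:
-- Hypothetical conversion helpers for testing:
fromList :: [a] -> List a
fromList = foldr (\a as -> In (Cons a as)) (In Nil)

toList :: List a -> [a]
toList (In Nil)         = []
toList (In (Cons a as)) = a : toList as
With these, toList (zip (fromList [1, 2, 3]) (fromList "ab")) evaluates to [(1,'a'),(2,'b')].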
In the recursion-schemes package the following types are defined:
newtype Fix f = Fix (f (Fix f))
newtype Mu f = Mu (forall a. (f a -> a) -> a)
Are they isomorphic? If so, how do you prove it?
Are they isomorphic?
Yes, they are isomorphic in Haskell. See "What is the difference between Fix, Mu and Nu in Ed Kmett's recursion scheme package" for some additional remarks.
If so, how do you prove it?
Let's begin by defining functions to perform the conversions:
muToFix :: Mu f -> Fix f
muToFix (Mu s) = s Fix
fixToMu :: Functor f => Fix f -> Mu f
fixToMu t = Mu (\alg -> cata alg t)
To show those functions witness an isomorphism, we must show that:
muToFix . fixToMu = id
fixToMu . muToFix = id
From Fix and back
One of the directions of the isomorphism comes off somewhat more straightforwardly than the other:
muToFix (fixToMu t) = t
muToFix (fixToMu t) -- LHS
muToFix (Mu (\f -> cata f t))
(\f -> cata f t) Fix
cata Fix t -- See below.
t -- LHS = RHS
The final passage above, cata Fix t = t, can be verified through the definition of cata:
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unfix
cata Fix t, then, is Fix (fmap (cata Fix) (unfix t)). We can use induction to show it must be t, at least for a finite t (it gets more subtle with infinite structures -- see the addendum at the end of this answer). There are two possibilities to consider:
unfix t :: f (Fix f) is empty, having no recursive positions to dig into. In that case, it must be equal to fmap absurd z for some z :: f Void, and thus:
cata Fix t
Fix (fmap (cata Fix) (unfix t))
Fix (fmap (cata Fix) (fmap absurd z))
Fix (fmap (cata Fix . absurd) z)
-- fmap doesn't do anything on an empty structure.
Fix (fmap absurd z)
Fix (unfix t)
t
unfix t is not empty. In that case, we at least know that fmap (cata Fix) can't do anything beyond applying cata Fix on the recursive positions. The induction hypothesis here is that doing so will leave those positions unchanged. We then have:
cata Fix t
Fix (fmap (cata Fix) (unfix t))
Fix (unfix t) -- Induction hypothesis.
t
(Ultimately, cata Fix = id is a corollary of Fix :: f (Fix f) -> Fix f being an initial F-algebra. Resorting directly to that fact in the context of this proof, though, would probably be too much of a shortcut.)
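(As a quick sanity check, not a replacement for the proof: using the ListF base functor that appears in the addendum below, round-tripping a small list gives back the same structure.)
-- Illustrative only; ListF is defined in the addendum below.
xs :: Fix (ListF Int)
xs = Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))
-- muToFix (fixToMu xs)  ==  Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))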
From Mu and back
Given muToFix . fixToMu = id, to prove that fixToMu . muToFix = id it suffices to prove either:
that muToFix is injective, or
that fixToMu is surjective.
Let's take the second option, and review the relevant definitions:
newtype Mu f = Mu (forall a. (f a -> a) -> a)
fixToMu :: Functor f => Fix f -> Mu f
fixToMu t = Mu (\alg -> cata alg t)
fixToMu being surjective, then, means that, given any specific Functor f, all functions of type forall a. (f a -> a) -> a can be defined as \alg -> cata alg t, for some specific t :: Fix f. The task, then, becomes cataloguing the forall a. (f a -> a) -> a functions and seeing whether all of them can be expressed in that form.
How might we define a forall a. (f a -> a) -> a function without leaning on fixToMu? No matter what, it must involve using the f a -> a algebra supplied as an argument to get an a result. The direct route would be applying it to some f a value. A major caveat is that, since a is polymorphic, we must be able to conjure said f a value for any choice of a. That is a feasible strategy as long as f Void values happen to exist, since fmap absurd can turn an f Void into an f a for any a. In that case, we can do:
fromEmpty :: Functor f => f Void -> forall a. (f a -> a) -> a
fromEmpty z = \alg -> alg (fmap absurd z)
To make the notation clearer, let's define a type for things we can use to define forall a. (f a -> a) -> a functions:
data Moo f = Empty (f Void)
fromMoo :: Functor f => Moo f -> forall a. (f a -> a) -> a
fromMoo (Empty z) = \alg -> alg (fmap absurd z)
Besides the direct route, there is just one other possibility. Given that f is a Functor, if we somehow have an f (Moo f) value we can apply the algebra twice, the first application being under the outer f layer, via fmap and fromMoo:
fromLayered :: Functor f => f (Moo f) -> forall a. (f a -> a) -> a
fromLayered u = \alg -> alg (fmap (\moo -> fromMoo moo alg) u)
Considering that we can also make forall a. (f a -> a) -> a out of f (Moo f) values, it makes sense to add them as a case of Moo:
data Moo f = Empty (f Void) | Layered (f (Moo f))
Accordingly, fromLayered can be incorporated into fromMoo:
fromMoo :: Functor f => Moo f -> forall a. (f a -> a) -> a
fromMoo = \case
    Empty z -> \alg -> alg (fmap absurd z)
    Layered u -> \alg -> alg (fmap (\moo -> fromMoo moo alg) u)
Note that, by doing so, we have sneakily moved from applying alg under one f layer to recursively applying alg under an arbitrary number of f layers.
Next, we can note that an f Void value can be injected into the Layered constructor:
emptyLayered :: Functor f => f Void -> Moo f
emptyLayered z = Layered (fmap absurd z)
That means we don't actually need the Empty constructor:
newtype Moo f = Moo (f (Moo f))
unMoo :: Moo f -> f (Moo f)
unMoo (Moo u) = u
What about the Empty case in fromMoo? The only difference between the two cases is that, in the Empty case, we have absurd instead of \moo -> fromMoo moo alg. Since all Void -> a functions are absurd, we don't need a separate Empty case there either:
fromMoo :: Functor f => Moo f -> forall a. (f a -> a) -> a
fromMoo (Moo u) = \alg -> alg (fmap (\moo -> fromMoo moo alg) u)
A possible cosmetic tweak is flipping the fromMoo arguments, so that we don't need to write the argument to fmap as a lambda:
foldMoo :: Functor f => (f a -> a) -> Moo f -> a
foldMoo alg (Moo u) = alg (fmap (foldMoo alg) u)
Or, more pointfree:
foldMoo :: Functor f => (f a -> a) -> Moo f -> a
foldMoo alg = alg . fmap (foldMoo alg) . unMoo
At this point, a second look at our definitions suggests some renaming is in order:
newtype Fix f = Fix (f (Fix f))
unfix :: Fix f -> f (Fix f)
unfix (Fix u) = u
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unfix
fromFix :: Functor f => Fix f -> forall a. (f a -> a) -> a
fromFix t = \alg -> cata alg t
And there it is: all forall a. (f a -> a) -> a functions have the form \alg -> cata alg t for some t :: Fix f. Therefore, fixToMu is surjective, and we have the desired isomorphism.
Addendum
In the comments, a germane question was raised about the applicability of the induction argument in the cata Fix t = t derivation. At a minimum, the functor laws and parametricity ensure that fmap (cata Fix) won't create extra work (for instance, it won't enlarge the structure, or introduce additional recursive positions to dig into), which justifies why stepping into the recursive positions is all that matters in the inductive step of the derivation. That being so, if t is a finite structure, the base case of an empty f (Fix f) will eventually be reached, and all is clear. If we allow t to be infinite, however, we can keep descending endlessly, fmap after fmap after fmap, without ever reaching the base case.
The situation with infinite structures, though, is not as awful as it might seem at first. Laziness, which is what makes infinite structures viable in the first place, also allows us to consume them incrementally:
GHCi> :info ListF
data ListF a b = Nil | Cons a b
-- etc.
GHCi> ones = Fix (Cons 1 ones)
GHCi> (\(Fix (Cons a _)) -> a) (cata Fix ones)
1
GHCi> (\(Fix (Cons _ (Fix (Cons a _)))) -> a) (cata Fix ones)
1
While the succession of recursive positions extends infinitely, we can stop at any point and get useful results out of the surrounding ListF functorial contexts. Such contexts, it bears repeating, are unaffected by fmap, and so any finite segment of the structure we might consume will be unaffected by cata Fix.
This laziness reprieve reflects how, as mentioned elsewhere in this discussion, laziness collapses the distinction between the fixed points Mu, Fix and Nu. Without laziness, Fix is not enough to encode productive corecursion, and so we have to switch to Nu, the greatest fixed point. Here is a tiny demonstration of the difference:
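(Nu isn't defined in this answer; the demonstration below assumes the existential encoding from recursion-schemes, which packages a seed together with a step function to unfold it:)
{-# LANGUAGE ExistentialQuantification #-}
-- Greatest fixed point, as encoded in recursion-schemes:
data Nu f = forall a. Nu (a -> f a) a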
GHCi> :set -XBangPatterns
GHCi> -- Like ListF, but strict in the recursive position.
GHCi> data SListF a b = SNil | SCons a !b deriving Functor
GHCi> ones = Nu (\() -> SCons 1 ()) ()
GHCi> (\(Nu c a) -> (\(SCons a _) -> a) (c a)) ones
1
GHCi> ones' = Fix (SCons 1 ones')
GHCi> (\(Fix (SCons a _)) -> a) ones'
^CInterrupted.
A recent proposal on the Haskell libraries mailing list led me to consider the following:
ft :: (Applicative f, Monoid m, Traversable t)
   => (b -> m) -> (a -> f b) -> t a -> f m
ft f g xs = foldMap f <$> traverse g xs
I noticed that the Traversable constraint can be weakened to Foldable:
import Data.Monoid (Ap (..)) -- Requires a recent base version
ft :: (Applicative f, Monoid m, Foldable t)
   => (b -> m) -> (a -> f b) -> t a -> f m
ft f g = getAp . foldMap (Ap . fmap f . g)
In the original proposal, f was supposed to be id, leading to
foldMapA
  :: (Applicative f, Monoid m, Foldable t)
  => (a -> f m) -> t a -> f m
-- foldMapA g = getAp . foldMap (Ap . fmap id . g)
foldMapA g = getAp . foldMap (Ap . g)
which is strictly better than the traverse-then-fold approach.
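For instance (my own illustrative example, not part of the proposal), foldMapA can sum a list while running the effects left to right:
ghci> import Data.Monoid (Sum (..))
ghci> foldMapA (\x -> Sum x <$ print x) [1, 2, 3]  -- my example
1
2
3
Sum {getSum = 6}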
But in the more general ft, there's a potential problem: fmap could be expensive in the f functor, in which case the fused version could potentially be more expensive than the original!
The usual tools for dealing with expensive fmap are Yoneda and Coyoneda. Since we need to lift many times and only lower once, Coyoneda is the one that can help us:
import Data.Functor.Coyoneda
ft' :: (Applicative f, Monoid m, Foldable t)
    => (b -> m) -> (a -> f b) -> t a -> f m
ft' f g = lowerCoyoneda . getAp
        . foldMap (Ap . fmap f . liftCoyoneda . g)
So now we replace all those expensive fmaps with one (buried in lowerCoyoneda). Problem solved? Not quite.
The trouble with Coyoneda is that its liftA2 is strict. So if we write something like
import Data.Monoid (First (..))
ft' (First . Just) Identity $ 1 : undefined
-- or, importing Data.Functor.Reverse,
ft' (Last . Just) Identity (Reverse $ 1 : undefined)
then it will fail, whereas ft has no trouble with those. Is there a way to have our cake and eat it too? That is, a version that uses only a Foldable constraint, only fmaps O(1) times more than traverse in the f functor, and is just as lazy as ft?
Note: we could make liftA2 for Coyoneda somewhat lazier:
liftA2 f m n = liftCoyoneda $
  case (m, n) of
    (Coyoneda g x, Coyoneda h y) -> liftA2 (\p q -> f (g p) (h q)) x y
This is enough to let it produce an answer to ft' (First . Just) Identity $ 1 : 2 : undefined, but not to ft' (First . Just) Identity $ 1 : undefined. I don't see any obvious way to make it lazier than that, because pattern matches on existentials must always be strict.
I don't believe it's possible. Avoiding fmaps at the elements seems to require some knowledge of the structure of the container. For example, the Traversable instance for lists can be written
traverse f (x : xs) = liftA2 (:) (f x) (traverse f xs)
We know that the first argument of (:) is a single element, so we can use liftA2 to combine the process of mapping over the action for that element with the process of combining the result of that action with the result associated with the rest of the list.
In a more generic context, the structure of a fold can be captured faithfully using a magma type with a bogus Monoid instance:
data Magma a = Bin (Magma a) (Magma a) | Leaf a | Nil
  deriving (Functor, Foldable, Traversable)

instance Semigroup (Magma a) where
  (<>) = Bin

instance Monoid (Magma a) where
  mempty = Nil
toMagma :: Foldable t => t a -> Magma a
toMagma = foldMap Leaf
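For intuition (my own worked example), on an ordinary list toMagma records the exact shape of the fold, since foldMap for lists nests to the right:
-- toMagma [1, 2, 3]
--   = Leaf 1 <> (Leaf 2 <> (Leaf 3 <> mempty))
--   = Bin (Leaf 1) (Bin (Leaf 2) (Bin (Leaf 3) Nil))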
We can write
ft'' :: (Applicative f, Monoid m, Foldable t)
     => (b -> m) -> (a -> f b) -> t a -> f m
ft'' f g = fmap (lowerMagma f) . traverse g . toMagma

lowerMagma :: Monoid m => (a -> m) -> Magma a -> m
lowerMagma f (Bin x y) = lowerMagma f x <> lowerMagma f y
lowerMagma f (Leaf x)  = f x
lowerMagma _ Nil       = mempty
But there's trouble in the Traversable instance:
traverse f (Leaf x) = Leaf <$> f x
That's exactly the sort of trouble we were trying to avoid. And there's no lazy fix for it. If we encounter Bin l r, we can't lazily determine whether l or r are leaves. So we're stuck. If we allowed a Traversable constraint on ft'', we could capture the result of traversing with a richer sort of magma type (such as the one used in lens), which I suspect could let us do something more clever, though I haven't found anything yet.
I have an AST that I'm annotating using Cofree:
data ExprF a
  = Const Int
  | Add a a
  | Mul a a
  deriving (Show, Eq, Functor)
I use type Expr = Fix ExprF to represent untagged ASTs, and type AnnExpr a = Cofree ExprF a to represent tagged ones. I've figured out a function to transform tagged ASTs into untagged ones by throwing away all the annotations:
forget :: Functor f => Cofree f a -> Fix f
forget = Fix . fmap forget . unwrap
This looks like it might be some sort of catamorphism (I'm using the definition from Kmett's recursion-schemes package).
cata :: Recursive t => (Base t a -> a) -> t -> a
cata f = c where c = f . fmap c . project
I'd think the above rewritten using a catamorphism would look something like this, but I can't figure out what to put for alg to make it typecheck.
forget :: Functor f => Cofree f a -> Fix f
forget = cata alg where
  alg = ???
Any help figuring out if this really is a cata/anamorphism, and some intuition for why it is/isn't would be greatly appreciated.
forget :: Functor f => Cofree f a -> Fix f
forget = cata (\(_ :< z) -> Fix z)
-- (Control.Comonad.Trans.Cofree.:<)
-- not to be confused with
-- (Control.Comonad.Cofree.:<)
Explanation
Looking only at the types, we can show that there is really only one way to implement forget. Let's start with the type of cata:
cata :: Recursive t => (Base t b -> b) -> t -> b
Here t ~ Cofree f a and the type instance of Base for Cofree gives:
type instance Base (Cofree f a) = CofreeF f a
Where CofreeF is:
data CofreeF f a b = a :< f b
-- N.B.: Cofree also defines a (:<) constructor, so you have to be
-- careful with imports.
i.e., a fancy pair type. Let's replace it with an actual pair type to make things clearer:
cata :: Functor f => ((a, f b) -> b) -> Cofree f a -> b
Now we're really specializing cata with a more concrete b, namely Fix f:
-- expected type of `cata` in `forget`
cata :: Functor f => ((a, f (Fix f)) -> Fix f) -> Cofree f a -> Fix f
forget is parametric in a and f, so the function we give cata can do nothing with the a in the pair, and the only sensible way to implement the remaining f (Fix f) -> Fix f is the Fix wrapper.
Operationally, Fix is the identity, so (\(_ :< z) -> Fix z) is really (\(_ :< z) -> z) which corresponds to the intuition of removing the annotation, i.e., the first component of the pair (_ :< z).
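To see it in action, a small sketch (the String tags and the tree itself are my own example; the (:<) in the tree literal is Control.Comonad.Cofree's constructor):
-- Hypothetical example data, reusing ExprF from the question:
tagged :: Cofree ExprF String
tagged = "root" :< Add ("l" :< Const 1) ("r" :< Const 2)

-- forget tagged  ==  Fix (Add (Fix (Const 1)) (Fix (Const 2)))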
I have this F-Algebra (introduced in a previous question), and I want to cast an effectful algebra upon it. Through desperate trial, I managed to put together a monadic catamorphism that works. I wonder if it may be generalized to an applicative, and if not, why.
This is how I defined Traversable:
instance Traversable Expr where
  traverse f (Branch xs) = fmap Branch $ traverse f xs
  traverse f (Leaf i)    = pure $ Leaf i
This is the monadic catamorphism:
type AlgebraM a f b = a b -> f b
cataM :: (Monad f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataM f x = f =<< (traverse (cataM f) . unFix $ x)
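(The session below relies on definitions from the previous question, which aren't shown here; a plausible reconstruction, with names matching the session, might look like this:)
{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}
-- Plausible reconstruction, not the original definitions:
newtype Fix f = Fix { unFix :: f (Fix f) }

data Expr a = Branch [a] | Leaf Int deriving (Show, Functor, Foldable)

branch :: [Fix Expr] -> Fix Expr
branch = Fix . Branch

leaf :: Int -> Fix Expr
leaf = Fix . Leaf

evalSum :: Expr Int -> Int
evalSum (Branch xs) = sum xs
evalSum (Leaf i)    = i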
And this is how it works:
λ let printAndReturn x = print x >> pure x
λ cataM (printAndReturn . evalSum) $ branch [branch [leaf 1, leaf 2], leaf 3]
1
2
3
3
6
6
My idea now is that I could rewrite like this:
cataA :: (Applicative f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataA f x = do
  subtree <- traverse (cataA f) . unFix $ x
  value <- f subtree
  return value
Unfortunately, value here depends on subtree and, according to a paper on applicative do-notation, in such a case we cannot desugar to Applicative. It seems like there's no way around this: we need a monad to float up from the depths of nesting.
Is it true? Can I safely conclude that only flat structures can be folded with applicative effects alone?
Can I safely conclude that only flat structures can be folded with applicative effects alone?
You can say that again! After all, "flattening nested structures" is exactly what makes a monad a monad, rather than an Applicative, which can only combine adjacent structures. Compare (a version of) the signatures of the two abstractions:
class Functor f => Applicative f where
  pure :: a -> f a
  (<.>) :: f a -> f b -> f (a, b)

class Applicative m => Monad m where
  join :: m (m a) -> m a
What Monad adds to Applicative is the ability to flatten nested ms into one m. That's why []'s join is concat. Applicative only lets you smash together heretofore-unrelated fs.
It's no coincidence that the free monad's Free constructor contains a whole f full of Free fs, whereas the free applicative's Cons constructor only contains one Ap f.
data Free f a = Return a | Free (f (Free f a))
data Ap f a where
  Pure :: a -> Ap f a
  Cons :: f a -> Ap f b -> Ap f (a, b)
Hopefully that gives you some intuition as to why you should expect that it's not possible to fold a tree using an Applicative.
Let's play a little type tennis to see how it shakes out. We want to write
cataA :: (Traversable f, Applicative m) => (f a -> m a) -> Fix f -> m a
cataA f (Fix xs) = _
We have xs :: f (Fix f) and a Traversable for f. My first instinct here is to traverse the f to fold the contained subtrees:
cataA f (Fix xs) = _ $ traverse (cataA f) xs
The hole now has a goal type of m (f a) -> m a. Since there's an f :: f a -> m a knocking about, let's try going under the m to convert the contained fs:
cataA f (Fix xs) = _ $ fmap f $ traverse (cataA f) xs
Now we have a goal type of m (m a) -> m a, which is join. So you do need a Monad after all.
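For completeness, filling the hole with join recovers (a version of) the questioner's cataM; a sketch assuming the Fix constructor used above:
-- join is from Control.Monad; this is essentially cataM from the question.
cataM :: (Traversable f, Monad m) => (f a -> m a) -> Fix f -> m a
cataM f (Fix xs) = join (fmap f (traverse (cataM f) xs))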
According to several sources, the Haskell implementation for composing functors is more or less the following:
import Data.Functor.Compose
newtype Compose f g a = Compose { getCompose :: f (g a) }
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (fmap (fmap f) x)
My question is: what is the type of x in the last definition?
I'd say it is f (g a), but even then I struggle to 'see' the computation fmap (fmap f) x.
Could anyone be as kind as to provide a clear and complete working example of this point? What about fmapping a Tree of Maybe's paying attention to both Empty's and Node's?
Thank you in advance.
what is the type of x in the last definition?
Before saying anything else about the matter: you can ask GHC! GHC 7.8 and above supports TypedHoles, meaning that if you place an underscore in an expression (not a pattern) and hit load or compile, you get a message with the expected type of the underscore and the types of the variables in local scope.
newtype Compose f g a = Compose { getCompose :: f (g a) }
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = _
GHC says now, with some parts omitted:
Notes.hs:6:26: Found hole ‘_’ with type: Compose f g b …
-- omitted type variable bindings --
Relevant bindings include
x :: f (g a)
(bound at /home/kutta/home/Dropbox/src/haskell/Notes.hs:6:21)
f :: a -> b
(bound at /home/kutta/home/Dropbox/src/haskell/Notes.hs:6:10)
fmap :: (a -> b) -> Compose f g a -> Compose f g b
(bound at /home/kutta/home/Dropbox/src/haskell/Notes.hs:6:5)
In the expression: _
In an equation for ‘fmap’: fmap f (Compose x) = _
In the instance declaration for ‘Functor (Compose f g)’
There you go: x :: f (g a). And after some practice, TypedHoles can help you tremendously in figuring out complex polymorphic code. Let's try to figure out our current code by writing out the right-hand side from scratch.
We've already seen that the hole had type Compose f g b. Therefore we must have a Compose _ on the right side:
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose _
The new hole has type f (g b). Now we should look at the context:
Relevant bindings include
x :: f (g a)
(bound at /home/kutta/home/Dropbox/src/haskell/Notes.hs:6:21)
f :: a -> b
(bound at /home/kutta/home/Dropbox/src/haskell/Notes.hs:6:10)
fmap :: (a -> b) -> Compose f g a -> Compose f g b
(bound at /home/kutta/home/Dropbox/src/haskell/Notes.hs:6:5)
The goal is to get an f (g b) out of the ingredients in the context. The fmap in the listing above unfortunately refers to the function being defined, which is sometimes helpful, but not here. We're better off using the fact that f and g are both functors, and can therefore be fmap-ed over. Since we have x :: f (g a), we guess that we should fmap something over x to get an f (g b):
fmap f (Compose x) = Compose (fmap _ x)
Now the hole becomes g a -> g b. But g a -> g b is very easy now, since we have f :: a -> b and g is a Functor, so we also have fmap :: (a -> b) -> g a -> g b, and therefore fmap f :: g a -> g b.
fmap f (Compose x) = Compose (fmap (fmap f) x)
And we're done.
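Putting it together, a quick check with the actual Data.Functor.Compose (the list-of-Maybes value is my own example):
ghci> import Data.Functor.Compose
ghci> getCompose (fmap succ (Compose [Just 1, Nothing, Just 3]))  -- my example
[Just 2,Nothing,Just 4]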
To wrap up the technique:
Start with putting a hole in the place where you don't know how to proceed. Here we started with putting the hole in place of the entire right hand side, but often you will have a good idea about most parts of an implementation and you'll need the hole in a specific problematic subexpression.
By looking at the types, try to narrow down which implementations could possibly lead to the goal and which could not. Fill in a new expression and reposition the hole. In proof assistant jargon this is called "refining".
Repeat step 2 until you either have the goal, in which case you're done, or the current goal seems impossible, in which case backtrack until the last non-obvious choice you made, and try an alternative refining.
The above technique is sometimes facetiously called "type tetris". A possible drawback is that you can implement complex code just by playing the "tetris", without actually understanding what you're doing. And sometimes after going too far, you get seriously stuck in the game, and then you have to start actually thinking about the problem. But ultimately it lets you understand code that would be otherwise very difficult to grasp.
I personally use TypedHoles all the time, basically as a reflex. I've come to rely on it so much that on one occasion when I had to move back to GHC 7.6 I felt rather uncomfortable (but fortunately you can emulate holes even there).
The type of x is f (g a). For example, x might be a list of trees of integers: [Tree Int] (which can also be written as [] (Tree Int) so that it matches f (g a) more closely).
As an example, consider the function succ :: Int -> Int, which adds one to an integer. Then fmap succ :: Tree Int -> Tree Int will increment every integer in a tree. Further, fmap (fmap succ) :: [Tree Int] -> [Tree Int] will apply the previous fmap succ to all the trees in a list, hence incrementing every integer in the list of trees.
If instead you have Tree (Maybe Int), then fmap (fmap succ) will increment every integer in such a tree. Values of the form Nothing will not be affected, while values Just x will have x incremented.
Example: (GHCi session)
> :set -XDeriveFunctor
> data Tree a = Node a (Tree a) (Tree a) | Empty deriving (Show, Functor)
> let x1 = [Node 1 Empty Empty]
> fmap (fmap succ) x1
[Node 2 Empty Empty]
> let x2 = [Node 1 Empty Empty, Node 2 (Node 3 Empty Empty) Empty]
> fmap (fmap succ) x2
[Node 2 Empty Empty,Node 3 (Node 4 Empty Empty) Empty]
> let x3 = Just (Node 1 Empty Empty)
> fmap (fmap succ) x3
Just (Node 2 Empty Empty)
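Finally, to address the question's "Tree of Maybe's" directly (same Tree type as above; the value x4 is my own example):
> let x4 = Node (Just 1) (Node Nothing Empty Empty) Empty
> fmap (fmap succ) x4
Node (Just 2) (Node Nothing Empty Empty) Empty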