Control.Monad.Free implements a free monad as:
data Free f a = Pure a | Free (f (Free f a))
instance Functor f => Functor (Free f) where
  fmap f = go where
    go (Pure a)  = Pure (f a)
    go (Free fa) = Free (go <$> fa)
I am having a lot of trouble understanding the second go line, especially in the context of descriptions of what a free monad is. Can someone please describe how this works and why it makes Free f a a free monad?
At this point, you're just making Free a functor rather than a monad. Of course, to be a monad, it has to be a functor as well!
I think it would be a little easier to think about if we rename the Free constructor to avoid confusion:
data Free f a = Pure a | Wrap (f (Free f a))
Now let's look at the structure of what we're building up. For the Pure case, we just have a value of type a. For the Wrap case, we have another Free f a value wrapped in the f functor.
Let's ignore the constructors for a second. That is, if we have Wrap (f (Pure a)) let's think of it as f a. This means that the structure we're building up is just f--a functor--applied repeatedly some number of times. Values of this type will look something like: f (f (f (f (f a)))). To make it more concrete, let f be [] to get: [[[[[a]]]]]. We can have as many levels of this as we want by using the Wrap constructor repeatedly; everything ends when we use Pure.
Putting the constructors back in, [[a]] would look like: Wrap [Wrap [Pure a]].
So all we're doing is taking the Pure value and repeatedly applying a functor to it.
Given this structure of a repeatedly applied functor, how would we map a function over it? For the Pure case--before we've wrapped it in f--this is pretty trivial: we just apply the function. But if we've already wrapped our value in f at least once, we have to map over the outer level and then recursively map over all the inner layers. Put another way: over the functor f, we have to map the very function that maps over the rest of the Free structure.
This is exactly what the second case of go is doing. go itself is just fmap for Free f a. <$> is fmap for f. So what we do is fmap go over f, which makes the whole thing recursive.
Since this mapping function is recursive, it can deal with an arbitrary number of levels. So we can map a function over [[a]] or [[[[a]]]] or whatever. This is why we need to fmap go when go is fmap itself--the important difference being that the first fmap works for a single layer of f and go recursively works for the whole Free f a construction.
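To make the recursion concrete, here is the second case traced out by hand for f = [] (a sketch, using the renamed Wrap constructor, with go = fmap succ):

go (Wrap [Wrap [Pure 1]])
= Wrap (go <$> [Wrap [Pure 1]])  -- outer layer: map go across the list
= Wrap [go (Wrap [Pure 1])]
= Wrap [Wrap (go <$> [Pure 1])]  -- next layer: recurse again
= Wrap [Wrap [go (Pure 1)]]
= Wrap [Wrap [Pure (succ 1)]]    -- base case: finally apply the function
= Wrap [Wrap [Pure 2]]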
I hope this cleared things up a bit.
To tell you the truth, I usually just find it easier not to read the code in these simpler functions, but rather to read the types and then write the function myself. Think of it as a puzzle. You're trying to construct this:
mapFree :: Functor f => (a -> b) -> Free f a -> Free f b
So how do we do it? Well, let's take the Pure constructor first:
mapFree f (Pure a) = ...
-- I like to write comments like these while using Haskell, then usually delete
-- them by the end:
--
-- f :: a -> b
-- a :: a
With the two type comments in there, and knowing the type of Pure, you should see the solution right away:
mapFree f (Pure a) = Pure (f a)
Now the second case:
mapFree f (Free fa) = ...
-- f :: a -> b
-- fa :: Functor f => f (Free f a)
Well, since f is a Functor, we can use fmap to apply mapFree f to the inner component of the f (Free f a) value. So we get:
mapFree f (Free fa) = Free (fmap (mapFree f) fa)
Now, using this definition as the Functor f => Functor (Free f) instance, we get:
instance Functor f => Functor (Free f) where
  fmap f (Pure a)  = Pure (f a)
  fmap f (Free fa) = Free (fmap (fmap f) fa)
With a bit of work, you can verify that the definition we just arrived at here is the same thing as the one you're puzzling over. (As others have mentioned, (<$>) (defined in Control.Applicative) is just a synonym for fmap.) You may still not understand it, but you managed to write it, which for types as abstract as these is very often good enough.
As for understanding it, though, the thing that helps me is the following: think of a Free monad as a sort of list-like structure, with Pure as [] and Free as (:). From the definition of the type you should see this: Pure is the base case, and Free is the recursive case. What the fmap instance is doing is "pushing" the mapped function to the bottom of this structure, to where the Pure lives.
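As a quick concrete check, using f = [] again (a hypothetical GHCi session, assuming a Show instance for display):

fmap (+1) (Free [Pure 1, Free [Pure 2]])
-- => Free [Pure 2, Free [Pure 3]]

Both Pures sit at the bottom of their respective branches, and the mapped function reaches both of them.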
Since I am confused myself, I answer with a question...could this be a correct substitution (relying on Tikhon's Wrap clarification)?
...
fmap g = go where
  go (Pure a)  = Pure (g a)
  go (Wrap fa) = Wrap (go <$> fa)
Substituting "fmap g" for "go", and "fmap" for "<$>" (since "<$>" is infix,
we flip "go" and "<$>"):
fmap g (Pure a) = Pure (g a)
fmap g (Wrap fa) = Wrap (fmap (fmap g) fa)
Substituting "f (Free f a)" for "fa" in the last line (from the first data
declaration):
fmap g (Wrap fa) = Wrap (fmap (fmap g) (f (Free f a)))
                 = Wrap (f (fmap g (Free f a)))
                 = Wrap (f (Pure (g a)))         -- if Free f a is Pure
or
                 = Wrap (f (fmap g (Wrap fa')))  -- if Free f a is Wrap
The last line includes the recursion fmap g (Wrap fa'), which continues until a Pure is encountered.
While working through the Composing Types chapter of the Haskell Book, I was given the task of writing Functor and Applicative instances for the following type.
newtype Compose f g a = Compose { getCompose :: f (g a) }
I wrote the following definitions
Functor:
fmap f (Compose fga) = Compose $ (fmap . fmap) f fga
Applicative:
(Compose f) <*> (Compose a) = Compose $ (<*>) <$> f <*> a
I learned that composing two Functors or Applicatives gives Functor and Applicative respectively.
The author also explained that it is not possible to compose two Monads the same way, which is why we use Monad Transformers. I just do not want to move on to Monad Transformers until I'm clear on why Monads do not compose.
So far I tried to write the bind function like this:
Monad:
(>>=) :: Compose f g a -> (a -> Compose f g b) -> Compose f g b
(Compose fga) >>= h = (fmap.fmap) h fga
and of course got this error from GHC
Expected type: Compose f g b
Actual type: f (g (Compose f g b))
If I can strip the outermost f g somehow, the composition gives us a monad right? (I still couldn't figure out how to strip that though)
I tried reading answers to other Stack Overflow questions like this one, but all the answers are theoretical or mathematical. I still haven't learned why Monads do not compose. Can somebody explain it to me without the math?
I think this is easiest to understand by looking at the join operator:
join :: Monad m => m (m a) -> m a
join is an alternative to >>= for defining a Monad, and is a little easier to reason about. (But now you have an exercise to do: show how to implement >>= from join, and how to implement join from >>=!)
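For reference, one standard way to do that exercise (hypothetical names, so as not to clash with the real class methods):

import Control.Monad (join)

bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin m f = join (fmap f m)  -- map the continuation inside, then flatten

joinViaBind :: Monad m => m (m a) -> m a
joinViaBind mma = mma >>= id       -- bind with the identity continuation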
Let's try to make a join operation for Compose f g and see what goes wrong. Our input is essentially a value of type f (g (f (g a))), and we want to produce a value of type f (g a). We also know that we have join for f and g individually, so if we could get a value of type f (f (g (g a))), then we could hit it with fmap join . join to get the f (g a) we wanted.
Now, f (f (g (g a))) isn't so far from f (g (f (g a))). All we really need is a function like this: distribute :: g (f a) -> f (g a). Then we could implement join like this:
join = Compose . fmap join . join . fmap (distribute . fmap getCompose) . getCompose
Note: there are some laws that we would want distribute to satisfy, in order to make sure that the join we get here is lawful.
Ok, so that shows how we can compose two monads if we have a distributive law distribute :: (Monad f, Monad g) => g (f a) -> f (g a). Now, it could be true that every pair of monads has a distributive law. Maybe we just have to think really hard about how to write one down?
Unfortunately there are pairs of monads that don't have a distributive law. So we can answer your original question by producing two monads that definitely don't have a way of turning a g (f a) into an f (g a). These two monads bear witness to the fact that monads don't compose in general.
I claim that g = IO and f = Maybe do not have a distributive law
-- Impossible!
distribute :: IO (Maybe a) -> Maybe (IO a)
Let's think about why such a thing should be impossible. The input to this function is an IO action that goes out into the real world and eventually produces Nothing or a Just x. The output of this function is either Nothing, or Just an IO action that, when run, eventually produces x. To produce the Maybe (IO a), we would have to peek into the future and predict what the IO (Maybe a) action is going to do!
In summary:
Monads can compose if there is a distributive law g (f a) -> f (g a). (but see the addendum below)
There are some monads that don't have such a distributive law.
Some monads can compose with each other, but not every pair of monads can compose.
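As a concrete instance of the first point, here is a pair that does compose (a sketch using only standard library functions): take g = Maybe, and let f be any monad. sequenceA from Traversable supplies exactly the distributive law we need, and the resulting composite f (Maybe a) is better known as MaybeT f.

distributeMaybe :: Applicative f => Maybe (f a) -> f (Maybe a)
distributeMaybe = sequenceA
-- distributeMaybe Nothing   == pure Nothing
-- distributeMaybe (Just fa) == Just <$> fa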
Addendum: "if", but what about "only if"? If all three of F, G, and FG are monads, then you can construct a natural transformation δ : ∀X. GFX -> FGX as the composite of GF applied to G's unit, GFη^G_X : GFX -> GFGX, followed by F's unit at GFGX, η^F_{GFGX} : GFGX -> FGFGX, and then the composite monad's multiplication μ_X : FGFGX -> FGX. In Haskellese (with explicit type applications for clarity), that would be
delta :: forall f g x. (Monad f, Monad g, Monad (Compose f g))
      => g (f x) -> f (g x)
delta = join' . pure @f . fmap @g (fmap @f (pure @g))
  where
    -- join for (f . g), via the `Monad (Compose f g)` instance
    join' :: f (g (f (g x))) -> f (g x)
    join' = getCompose . join @(Compose f g) . fmap Compose . Compose
So if the composition FG is a monad, then you can get a natural transformation with the right shape to be a distributive law. However, there are some extra constraints that fall out of making sure your distributive law satisfies the correct properties, vaguely alluded to above. As always, the n-Category Cafe has the gory details.
Indeed it does:
λ :i Applicative
class Functor f => Applicative (f :: * -> *) where
At the same time:
fmap f x = pure f <*> x
— by the laws of Applicative we can define fmap from pure & <*>.
I don't get why I should tediously define fmap every time I want an Applicative if, really, fmap can be automatically set up in terms of pure and <*>.
I gather it would be necessary if pure or <*> were somehow dependent on the definition of fmap, but I fail to see why they would have to be.
While fmap can be derived from pure and <*>, it is generally not the most efficient approach. Compare:
fmap :: (a -> b) -> Maybe a -> Maybe b
fmap f Nothing = Nothing
fmap f (Just x) = Just (f x)
with the work done using Applicative tools:
fmap :: (a -> b) -> Maybe a -> Maybe b
-- inlining pure and <*> in: fmap f x = pure f <*> x
fmap f x = case (Just f) of
    Nothing -> Nothing
    Just f' -> case x of
        Nothing -> Nothing
        Just x' -> Just (f' x')
Pointlessly wrapping something up in a constructor just to do a pattern-match against it.
So, clearly it is useful to be able to define fmap independently of the Applicative functions. That could be done by making a single typeclass with all three functions, using a default implementation for fmap that you could override. However, there are types that make good Functor instances but not good Applicative instances, so you may need to implement just one. Thus, two typeclasses.
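A classic example of such a type (a sketch; the Tagged name here is made up, not the one from the tagged package): pair every value with a tag of type t.

data Tagged t a = Tagged t a

instance Functor (Tagged t) where
  fmap f (Tagged t a) = Tagged t (f a)

-- But pure :: a -> Tagged t a is impossible in general: there is no t
-- to put in the tag slot. (Add a Monoid t constraint and you recover
-- the usual ((,) t)-style Applicative; without it, Functor is all we get.)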
And since there are no types with Applicative instances but without Functor instances, you should be able to treat an Applicative as though it were a Functor, if you like; hence the extension relationship between the two.
However, if you tire of implementing Functor, you can (in most cases) ask GHC to derive the only possible implementation of Functor for you, with
{-# LANGUAGE DeriveFunctor #-}
data Boring a = Boring a deriving Functor
While there are proposals to make this easier (https://ghc.haskell.org/trac/ghc/wiki/IntrinsicSuperclasses), the "default instances" problem itself is very difficult.
One challenge is how to deal with common superclasses:
fmap f x = pure f <*> x -- using Applicative
fmap f x = runIdentity (traverse (Identity . f) x) -- using Traversable
fmap f x = x >>= (return . f) -- using Monad
Which one to pick?
So the best we can do now is to provide fmapDefault (as Data.Traversable does); or use pure f <*> x; or fmapRep from Data.Functor.Rep when applicable.
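For example, fmapDefault is documented for exactly this use: once traverse is written, fmap comes for free (a sketch, reusing the Boring type from above):

import Data.Traversable (fmapDefault)

data Boring a = Boring a

instance Foldable Boring where
  foldr f z (Boring a) = f a z

instance Traversable Boring where
  traverse f (Boring a) = Boring <$> f a

instance Functor Boring where
  fmap = fmapDefault  -- no new logic; derived from traverse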
I have some questions concerning the function hoistFree from the Haskell library Control.Monad.Free. Given a transformation f between two functors, hoistFree f produces a morphism between the corresponding free monads. Here is its definition.
hoistFree :: Functor g => (forall a. f a -> g a) -> Free f b -> Free g b
hoistFree _ (Pure a) = Pure a
hoistFree f (Free as) = Free (hoistFree f <$> f as)
Question 1 How does Haskell know that <$> is the map associated to g and not to f, Free f or Free g?
Question 2 Why has hoistFree not been defined as
hoistFree :: Functor g => (forall a. f a -> g a) -> Free f b -> Free g b
hoistFree _ (Pure a) = Pure a
hoistFree f (Free as) = Free (f (hoistFree f <$> as))
?
If f is a natural transformation, these two definitions coincide. The second definition, however, always satisfies the relation
hoistFree f = iter (wrap . f) . fmap return
which looks pretty natural. Furthermore, there are a few basic functions that can be expressed using iter_map f g = iter f . fmap g. For example,
(=<<) f = iter_map wrap f
Question 3 Is iter_map defined somewhere? It looks like a monadic mapreduce. I didn't see it in the base library. Is there some gain in fusing iter and fmap? In a few other languages there is, but I am not sure about Haskell.
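For concreteness, here is the iter_map I have in mind, written out with types (a sketch; iter and the Functor instance for Free come from Control.Monad.Free):

import Control.Monad.Free (Free, iter)

iter_map :: Functor f => (f b -> b) -> (a -> b) -> Free f a -> b
iter_map f g = iter f . fmap g  -- fold the layers after mapping over the leaves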
Question 1
Because of type inference, which chooses <$> from g. Indeed, in
Free (hoistFree f <$> f as)
f as has type g <something>, hence the <$> is the one given by Functor g.
Question 2
I think that, in Haskell, f is always a natural transformation. Any polymorphic function f a -> g a must be natural in a, by parametricity / free theorem.
Both definitions being equivalent, I'm not sure if any one is the "best". Maybe yours is. Or maybe the original one has better performance in practice. It looks a bit like the foldr vs foldl' debate about associative operators, where there's no clear winner.
Question 3 No idea.
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
Hi,
There are a lot of functions whose signatures I can't understand. Of course I understand that traverse takes two arguments, the first of which is a function. However,
what does (a -> f b) mean? I can understand (a -> b).
Similarly, what about t a and f (t b)?
Could you explain them to me?
traverse is a type-classed function, so sadly its behaviour depends on what exactly we choose t to be. This is not dissimilar to >>= or fmap. However, there are rules for its behaviour, just like in those cases. The rules are supposed to capture the idea that traverse takes a function a -> f b, which is an effectful transformation from a to b, and lifts it to work on a whole "container" of as, collecting the effects of each of the local transformations.
For example, if we have Maybe a the implementation of traverse would be
traverse f (Just a) = Just <$> f a
traverse f Nothing = pure Nothing
For lists
traverse f [a1, a2, ...] = (:) <$> f a1 <*> ((:) <$> f a2 <*> ...)
Notice how we're taking advantage of the fact that the "effect" f is not only a functor, but applicative so we can take two f-ful computations, f a and f b and smash them together to get f (a, b). Now we want to come up with a few laws explaining that all traverse can do is apply f to the elements and build the original t a back up while collecting the effects on the outside. We say that
traverse Identity = Identity -- We don't lose elements
t . traverse f = traverse (t . f) -- For nicely composing t
traverse (Compose . fmap g . f) = Compose . fmap (traverse g) . traverse f
Now this looks quite complicated, but all it's doing is pinning down the idea that traverse "walks around the structure and applies the local transformation". So while you cannot just read the signature to understand what traverse does, an OK intuition for the signature is:
We get a local, effectful function f :: a -> f b
A functor full of as
We get back a container full of bs, obtained by applying f to each element, à la fmap
All the effects of f are accumulated so we get f (t b), not just t b.
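For example (a sketch you can try in GHCi, using the Maybe applicative to model failure):

traverse (\x -> if x > 0 then Just x else Nothing) [1, 2, 3]
-- => Just [1,2,3]   (every local effect succeeded)
traverse (\x -> if x > 0 then Just x else Nothing) [1, -2, 3]
-- => Nothing        (a single local failure becomes overall failure)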
Remember though, traverse can get used in some weird ways. For example, the lens package is chock-full of using traverse with very strange functors to great effect.
As a quick test, can you figure out how to use a legal traverse to implement fmap for t? That is
fmapOverkill :: Traversable f => (a -> b) -> (f a -> f b)
Or headMay
headMay :: Traversable t => t a -> Maybe a
Both of these are results of the fact that traversable instances also satisfy Functor and Foldable!
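In case you want to check your answers, here is one possible pair of solutions (a sketch; the first goes through the Identity functor, the second uses the Foldable side):

import Data.Functor.Identity (Identity (..))

fmapOverkill :: Traversable f => (a -> b) -> f a -> f b
fmapOverkill f = runIdentity . traverse (Identity . f)

headMay :: Traversable t => t a -> Maybe a
headMay = foldr (\x _ -> Just x) Nothing  -- keeps only the first element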
I was reading http://www.haskellforall.com/2013/06/from-zero-to-cooperative-threads-in-33.html where an abstract syntax tree is derived as the free monad of a functor representing a set of instructions. I noticed that the free monad Free is not much different from the fixpoint operator on functors Fix.
The article uses the monad operations and do syntax to build those ASTs (fixpoints) in a concise way. I'm wondering if that's the only benefit from the free monad instance? Are there any other interesting applications that it enables?
(N.B. this combines a bit from both mine and Gabriel's comments above.)
It's possible for every inhabitant of the fixed point of a Functor to be infinite; for instance, let x = Fix (Identity x) in x === Fix (Identity (Fix (Identity ...))) is the only inhabitant of Fix Identity. Free differs immediately from Fix in that it ensures there is at least one finite inhabitant of Free f. In fact, if Fix f has any infinite inhabitants then Free f has infinitely many finite inhabitants.
Another immediate side-effect of this unboundedness is that Functor f => Fix f isn't a Functor anymore. We'd need to implement fmap :: Functor f => (a -> b) -> (f a -> f b), but Fix has "filled all the holes" in f a that used to contain the a, so we no longer have any as to apply our fmap'd function to.
This is important for creating Monads because we'd like to implement return :: a -> Free f a and have, say, this law hold fmap f . return = return . f, but it doesn't even make sense in a Functor f => Fix f.
So how does Free "fix" these Fixed point foibles? It "augments" our base functor with the Pure constructor. Thus, for all Functor f, Pure :: a -> Free f a. This is our guaranteed-to-be-finite inhabitant of the type. It also immediately gives us a well-behaved definition of return.
return = Pure
So you might think of this addition as taking out potentially infinite "tree" of nested Functors created by Fix and mixing in some number of "living" buds, represented by Pure. We create new buds using return which might be interpreted as a promise to "return" to that bud later and add more computation. In fact, that's exactly what flip (>>=) :: (a -> Free f b) -> (Free f a -> Free f b) does. Given a "continuation" function f :: a -> Free f b which can be applied to types a, we recurse down our tree returning to each Pure a and replacing it with the continuation computed as f a. This lets us "grow" our tree.
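For reference, the instance this paragraph is describing is the standard free-monad Monad instance (a sketch; a modern module would also need Functor and Applicative instances):

instance Functor f => Monad (Free f) where
  return = Pure
  Pure a  >>= f = f a                     -- a bud: hand it to the continuation
  Free fa >>= f = Free (fmap (>>= f) fa)  -- a branch: recurse underneath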
Now, Free is clearly more general than Fix. To drive this home, it's possible to see any type Functor f => Fix f as a subtype of the corresponding Free f a! Simply choose a ~ Void where we have data Void = Void Void (i.e., a type with no values: the empty type, which cannot be constructed).
To make it more clear, we can break our Fix'd Functors with break :: Fix f -> Free f a and then try to invert it with affix :: Free f Void -> Fix f.
break (Fix f) = Free (fmap break f)
affix (Free f) = Fix (fmap affix f)
Note first that affix does not need to handle the Pure x case because in this case x :: Void and thus cannot really be there, so Pure x is absurd and we'll just ignore it.
Also note that break's return type is a little subtle since the a type only appears in the return type, Free f a, such that it's completely inaccessible to any user of break. "Completely inaccessible" and "cannot be instantiated" give us the first hint that, despite the types, affix and break are inverses, but we can just prove it.
(break . affix) (Free f)
=== [definition of affix]
break (Fix (fmap affix f))
=== [definition of break]
Free (fmap break (fmap affix f))
=== [definition of (.)]
Free ( (fmap break . fmap affix) f )
=== [functor coherence laws]
Free (fmap (break . affix) f)
which should show (co-inductively, or just intuitively, perhaps) that (break . affix) is an identity. The other direction goes through in a completely identical fashion.
So, hopefully this shows that Free f is larger than Fix f for all Functor f.
So why ever use Fix? Well, sometimes you only want the properties of Free f Void due to some side effect of layering fs. In this case, calling it Fix f makes it more clear that we shouldn't try to (>>=) or fmap over the type. Furthermore, since Fix is just a newtype it might be easier for the compiler to "compile away" layers of Fix since it only plays a semantic role anyway.
Note: we can more formally talk about how Void and forall a. a are isomorphic types in order to see more clearly how the types of affix and break are harmonious. For instance, we have absurd :: Void -> a as absurd (Void v) = absurd v and unabsurd :: (forall a. a) -> Void as unabsurd a = a. But these get a little silly.
There is a deep and 'simple' connection.
It's a consequence of the fact that left adjoints preserve colimits, in particular initial objects: L 0 ≅ 0.
Categorically, Free f is a functor from a category to its F-algebras (Free f is left adjoint to a forgetful functor going the other way 'round). Working in Hask, our initial object is Void
Free f Void ≅ 0
and the initial algebra in the category of F-algebras is Fix f: Free f Void ≅ Fix f
import Data.Void
import Control.Monad.Free
free2fix :: Functor f => Free f Void -> Fix f
free2fix (Pure void) = absurd void
free2fix (Free body) = Fix (free2fix <$> body)
fixToFree :: Functor f => Fix f -> Free f Void
fixToFree (Fix body) = Free (fixToFree <$> body)
Similarly, right adjoints (such as Cofree f, a functor from Hask to the category of F-coalgebras) preserve final objects: R 1 ≅ 1.
In Hask the final object is the unit type (), and the final object of F-coalgebras is also Fix f (initial algebra and final coalgebra coincide in Hask), so we get: Cofree f () ≅ Fix f
import Control.Comonad.Cofree
cofree2fix :: Functor f => Cofree f () -> Fix f
cofree2fix (() :< body) = Fix (cofree2fix <$> body)
fixToCofree :: Functor f => Fix f -> Cofree f ()
fixToCofree (Fix body) = () :< (fixToCofree <$> body)
Just look how similar the definitions are!
newtype Fix f
= Fix (f (Fix f))
Fix f is Free f with no variables.
data Free f a
= Pure a
| Free (f (Free f a))
Fix f is Cofree f with dummy values.
data Cofree f a
= a :< f (Cofree f a)