Definition of hoistfree - haskell

I have some questions concerning the function hoistFree from the Haskell library Control.Monad.Free. Given a transformation f between two functors, hoistFree f produces a morphism between the corresponding free monads. Here is its definition.
hoistFree :: Functor g => (forall a. f a -> g a) -> Free f b -> Free g b
hoistFree _ (Pure a) = Pure a
hoistFree f (Free as) = Free (hoistFree f <$> f as)
Question 1 How does Haskell know that <$> is the map associated to g and not to f, Free f or Free g?
Question 2 Why has hoistFree not been defined as
hoistFree :: Functor g => (forall a. f a -> g a) -> Free f b -> Free g b
hoistFree _ (Pure a) = Pure a
hoistFree f (Free as) = Free (f (hoistFree f <$> as))
?
If f is a natural transformation, these two definitions coincide. The second definition however always satisfies the relation
hoistFree f = iter (wrap . f) . fmap return
which looks pretty natural. Furthermore, there are a few basic functions that can be expressed using iter_map f g = iter f . fmap g. For example,
(=<<) f = iter_map wrap f
Question 3 Is iter_map defined somewhere? It looks like a monadic map-reduce. I didn't see it in the base library. Is there some gain in fusing iter and map? In a few other languages there is, but I am not sure about Haskell.

Question 1
Because of type inference, which chooses <$> from g. Indeed, in
Free (hoistFree f <$> f as)
f as has type g <something>, hence the <$> is the one given by Functor g.
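Spelling out the types makes the inference visible. Here is the same clause with every intermediate step annotated (hoistFreeAnnotated is just a renamed copy for illustration):
{-# LANGUAGE RankNTypes, ScopedTypeVariables #-}
import Control.Monad.Free (Free (..))

hoistFreeAnnotated :: forall f g b. Functor g
                   => (forall a. f a -> g a) -> Free f b -> Free g b
hoistFreeAnnotated _ (Pure a)  = Pure a
hoistFreeAnnotated f (Free as) = Free gs
  where
    gAs :: g (Free f b)
    gAs = f as                          -- the natural transformation applied to as :: f (Free f b)
    gs :: g (Free g b)
    gs = hoistFreeAnnotated f <$> gAs   -- so <$> here can only be fmap at g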
Question 2
I think that, in Haskell, f is always a natural transformation. Any polymorphic function f a -> g a must be natural in a, by parametricity / free theorem.
Both definitions being equivalent, I'm not sure either one is the "best". Maybe yours is. Or maybe the original one has better performance in practice. It looks a bit like the foldr vs. foldl' debate for associative operators, where there's no clear winner.
Question 3 No idea.
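For what it's worth, here is a minimal sketch of iter_map under the asker's own definition. It is not a library function; iter and the Free constructors come from Control.Monad.Free:
{-# LANGUAGE RankNTypes #-}
import Control.Monad.Free (Free (..), iter)

-- A final fold (iter) fused with a preliminary fmap.
iterMap :: Functor f => (f b -> b) -> (a -> b) -> Free f a -> b
iterMap f g = iter f . fmap g

-- The second proposed definition of hoistFree, expressed with iterMap.
hoistFree2 :: Functor f => (forall a. f a -> g a) -> Free f b -> Free g b
hoistFree2 nat = iterMap (Free . nat) Pure

-- Bind, as in the question: (=<<) k = iter_map wrap k.
bindFree :: Functor f => (a -> Free f b) -> Free f a -> Free f b
bindFree = iterMap Free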

Related

How do the requirements for the instances of the Applicative type class relate to their implementations for Functor [duplicate]

According to Haskell's library documentation, every instance of the Applicative class must satisfy the four rules:
identity: pure id <*> v = v
composition: pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
homomorphism: pure f <*> pure x = pure (f x)
interchange: u <*> pure y = pure ($ y) <*> u
It then says that as a consequence of these rules, the underlying Functor instance will satisfy fmap f x = pure f <*> x. But since the method fmap does not even appear in the above equations, how exactly does this property follow from them?
"Short" answer:
For any functor F, there is a "free theorem" (see below) for the type:
(a -> b) -> F a -> F b
This theorem states that for any (total) function, say foo, with this type, the following will be true for any functions f, f', g, and h, with appropriate matching types:
If f' . g = h . f, then foo f' . fmap g = fmap h . foo f.
Note that it is not at all obvious why this should be true.
Anyway, if you set f = id and g = id and use the functor law fmap id = id, this theorem simplifies to:
For all h, we have foo h = fmap h . foo id.
Now, if F is also an applicative, then the function:
foo :: (a -> b) -> F a -> F b
foo f x = pure f <*> x
has the right type, so it satisfies the theorem. Therefore, for all h, we have:
pure h <*> x
-- by definition of foo
= foo h x
-- by the specialized version of the theorem
= (fmap h . foo id) x
-- by definition of the operator (.)
= fmap h (foo id x)
-- by the definition of foo
= fmap h (pure id <*> x)
-- by the identity law for applicatives
= fmap h x
In other words, the identity law for applicatives implies the relation:
pure h <*> x = fmap h x
It is unfortunate that the documentation does not include some explanation or at least acknowledgement of this extremely non-obvious fact.
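To see the conclusion in action at a couple of standard instances (a spot check, not a proof):
main :: IO ()
main = do
  -- pure h <*> x  ==  fmap h x
  print (pure (+1) <*> Just 41)                          -- Just 42
  print (fmap (+1) (Just 41))                            -- Just 42
  print ((pure (*2) <*> [1,2,3]) == fmap (*2) [1,2,3])   -- True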
Longer answer:
Originally, the documentation listed the four laws (identity, composition, homomorphism, and interchange), plus two additional laws for *> and <* and then simply stated:
The Functor instance should satisfy
fmap f x = pure f <*> x
The wording above was replaced with the new text:
As a consequence of these laws, the Functor instance for f will satisfy
fmap f x = pure f <*> x
as part of commit 92b562403 in February 2011 in response to a suggestion made by Russell O'Connor on the libraries list.
Russell pointed out that this rule was actually implied by the other applicative laws. Originally, he offered the following proof (the link in the post is broken, but I found a copy on archive.org). He pointed out that the function:
possibleFmap :: Applicative f => (a -> b) -> f a -> f b
possibleFmap f x = pure f <*> x
satisfies the Functor laws for fmap:
pure id <*> x = x {- Identity Law -}
pure (f . g) <*> x
= {- infix to prefix -}
pure ((.) f g) <*> x
= {- 2 applications of homomorphism law -}
pure (.) <*> pure f <*> pure g <*> x
= {- composition law -}
pure f <*> (pure g <*> x)
and then reasoned that:
So, \f x -> pure f <*> x satisfies the laws of a functor.
Since there is at most one functor instance per data type,
(\f x -> pure f <*> x) = fmap.
A key part of this proof is that there is only one possible functor instance (i.e., only one way of defining fmap) per data type.
When asked about this, he gave the following proof of the uniqueness of fmap.
Suppose we have a functor f and another function
foo :: (a -> b) -> f a -> f b
Then, as a consequence of the free theorem for foo, for any f :: a -> b and any g :: b -> c:
foo (g . f) = fmap g . foo f
In particular, if foo id = id, then
foo g = foo (g . id) = fmap g . foo id = fmap g . id = fmap g
Obviously, this depends critically on the "consequence of the free theorem for foo". Later, Russell realized that the free theorem could be used directly, together with the identity law for applicatives, to prove the needed law. That's what I've summarized in my "short answer" above.
Free Theorems...
So what about this "free theorem" business?
The concept of free theorems comes from a paper by Wadler, "Theorems for Free". Here's a Stack Overflow question that links to the paper and some other resources. Understanding the theory "for real" is hard, but you can think about it intuitively. Let's pick a specific functor, like Maybe. Suppose we had a function with the following type:
foo :: (a -> b) -> Maybe a -> Maybe b
foo f x = ...
Note that, no matter how complex and convoluted the implementation of foo is, that same implementation needs to work for all types a and b. It doesn't know anything about a, so it can't do anything with values of type a, other than apply the function f, and that just gives it a b value. It doesn't know anything about b either, so it can't do anything with a b value, except maybe return Just someBValue. Critically, this means that the structure of the computation performed by foo -- what it does with the input value x, whether and when it decides to apply f, etc. -- is entirely determined by whether x is Nothing or Just ....

Think about this for a bit: foo can inspect x to see if it's Nothing or Just someA. But, if it's Just someA, it can't learn anything about the value someA: it can't use it as-is because it doesn't understand the type a, and it can't do anything with f someA, because it doesn't understand the type b. So, if x is Just someA, the function foo can only act on its Just-ness, not on the underlying value someA.
This has a striking consequence. If we were to use a function g to change the input values out from under foo f x by writing:
foo f' (fmap g x)
because fmap g doesn't change x's Nothing-ness or Just-ness, this change has no effect on the structure of foo's computation. It behaves the same way, processing the Nothing or Just ... value in the same way, applying f' in exactly the same circumstances and at exactly the same time that it previously applied f, etc.
This means that, as long as we've arranged things so that f' acting on the g-transformed value gives the same answer as an h-transformed version of f acting on the original value -- in other words if we have:
f' . g = h . f
then we can trick foo into processing our g-transformed input in exactly the same way it would have processed the original input, as long as we account for the input change after foo has finished running by applying h to the output:
foo f' (fmap g x) = fmap h (foo f x)
I don't know whether or not that's convincing, but that's how we get the free theorem:
If f' . g = h . f then foo f' . fmap g = fmap h . foo f.
It basically says that because we can transform the input in a way that foo won't notice (because of its polymorphic type), the answer is the same whether we transform the input and run foo, or run foo first and transform its output instead.
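As a concrete spot check at the Maybe functor, here is one instantiation of the theorem, with functions chosen so that the side condition f' . g = h . f holds (the names are local illustrations only):
main :: IO ()
main = do
  let f, g, f', h :: Int -> Int
      f  = (*2)
      g  = (+1)
      f' = (*2)
      h  = (+2)          -- side condition: f' . g = h . f = \n -> 2*n + 2
      foo = fmap         -- any function of the right type would do here
      x  = Just 10 :: Maybe Int
  print (foo f' (fmap g x))   -- Just 22
  print (fmap h (foo f x))    -- Just 22, as the free theorem predicts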

Any function with the same polymorphic type as fmap must be equal to fmap?

I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is: because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any function of that type will in practice also obey the functor laws, i.e. it will be fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated to fmap's polymorphic type (see the comment under Lemma 1).
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap:
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers no bigger than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
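As a quick demonstration that such a function really does inhabit fmap's type at lists while disobeying the functor laws (the name notFmap is made up for the example):
notFmap :: (a -> b) -> [a] -> [b]
notFmap f = map f . reverse   -- same type as fmap at [], but not fmap

main :: IO ()
main = do
  print (notFmap (+1) [1,2,3])   -- [4,3,2]
  print (fmap    (+1) [1,2,3])   -- [2,3,4]
  print (notFmap id   [1,2,3])   -- [3,2,1]: the law fmap id = id fails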

Why are monads not closed under composition?

While I was working through the Composing Types chapter of the Haskell Book, I was given tasks to write Functor and Applicative instances for the following type.
newtype Compose f g a = Compose { getCompose :: f (g a) }
I wrote the following definitions
Functor:
fmap f (Compose fga) = Compose $ (fmap . fmap) f fga
Applicative:
(Compose f) <*> (Compose a) = Compose $ (<*>) <$> f <*> a
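For reference, the complete instances, with their heads and the pure that the excerpt above omits, look like this (essentially what Data.Functor.Compose provides):
newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose fga) = Compose ((fmap . fmap) f fga)

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure = Compose . pure . pure
  Compose f <*> Compose a = Compose ((<*>) <$> f <*> a)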
I learned that composing two Functors or Applicatives gives a Functor and an Applicative, respectively.
The author also explained that it is not possible to compose two Monads the same way, which is why we use monad transformers. I just do not want to read about monad transformers until I'm clear on why Monads do not compose.
So far I have tried to write a bind function like this:
Monad:
(>>=) :: Compose f g a -> (a -> Compose f g b) -> Compose f g b
(Compose fga) >>= h = (fmap.fmap) h fga
and of course got this error from GHC
Expected type: Compose f g b
Actual type: f (g (Compose f g b))
If I can strip the outermost f g somehow, the composition gives us a monad, right? (I still couldn't figure out how to strip it, though.)
I tried reading answers from other Stack Overflow questions like this one, but all the answers are theoretical or mathematical. I still haven't learned why Monads do not compose. Can somebody explain it to me without using math?
I think this is easiest to understand by looking at the join operator:
join :: Monad m => m (m a) -> m a
join is an alternative to >>= for defining a Monad, and is a little easier to reason about. (But now you have an exercise to do: show how to implement >>= from join, and how to implement join from >>=!)
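(If you want to check your answers to that exercise, one standard solution is sketched here, with fresh names to avoid clashing with the Prelude:)
import Control.Monad (join)

-- >>= from join and fmap:
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin m k = join (fmap k m)

-- join from >>=:
joinViaBind :: Monad m => m (m a) -> m a
joinViaBind mm = mm >>= id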
Let's try to make a join operation for Compose f g and see what goes wrong. Our input is essentially a value of type f (g (f (g a))), and we want to produce a value of type f (g a). We also know that we have join for f and g individually, so if we could get a value of type f (f (g (g a))), then we could hit it with fmap join . join to get the f (g a) we wanted.
Now, f (f (g (g a))) isn't so far from f (g (f (g a))). All we really need is a function like this: distribute :: g (f a) -> f (g a). Then we could implement join like this:
join = Compose . fmap join . join . fmap (distribute . fmap getCompose) . getCompose
Note: there are some laws that we would want distribute to satisfy, in order to make sure that the join we get here is lawful.
Ok, so that shows how we can compose two monads if we have a distributive law distribute :: (Monad f, Monad g) => g (f a) -> f (g a). Now, it could be true that every pair of monads has a distributive law. Maybe we just have to think really hard about how to write one down?
Unfortunately there are pairs of monads that don't have a distributive law. So we can answer your original question by producing two monads that definitely don't have a way of turning a g (f a) into an f (g a). These two monads witness the fact that monads don't compose in general.
I claim that g = IO and f = Maybe do not have a distributive law
-- Impossible!
distribute :: IO (Maybe a) -> Maybe (IO a)
Let's think about why such a thing should be impossible. The input to this function is an IO action that goes out into the real world and eventually produces Nothing or a Just x. The output of this function is either Nothing, or Just an IO action that, when run, eventually produces x. To produce the Maybe (IO a), we would have to peek into the future and predict what the IO (Maybe a) action is going to do!
In summary:
Monads can compose if there is a distributive law g (f a) -> f (g a). (but see the addendum below)
There are some monads that don't have such a distributive law.
Some monads can compose with each other, but not every pair of monads can compose.
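For a positive example of that last point: Maybe distributes over any monad m, with g = Maybe and f = m in the summary's g (f a) -> f (g a), which is essentially why m (Maybe a) (the MaybeT transformer) is a monad. A sketch:
-- Maybe (m a) -> m (Maybe a): if there is an action, run it and rewrap.
distributeMaybe :: Monad m => Maybe (m a) -> m (Maybe a)
distributeMaybe Nothing   = return Nothing
distributeMaybe (Just ma) = fmap Just ma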
Addendum: "if", but what about "only if"? If all three of F, G, and FG are monads, then you can construct a natural transformation δ : ∀X. GFX -> FGX as the composition of GFη_X : GFX -> GFGX followed by η_{GFGX} : GFGX -> FGFGX and then by μ_X : FGFGX -> FGX. In Haskellese (with explicit type applications for clarity), that would be
delta :: forall f g x. (Monad f, Monad g, Monad (Compose f g))
      => g (f x) -> f (g x)
delta = join' . pure @f . fmap @g (fmap @f (pure @g))
  where
    -- join for (f . g), via the `Monad (Compose f g)` instance
    join' :: f (g (f (g x))) -> f (g x)
    join' = getCompose . join @(Compose f g) . fmap Compose . Compose
So if the composition FG is a monad, then you can get a natural transformation with the right shape to be a distributive law. However, there are some extra constraints that fall out of making sure your distributive law satisfies the correct properties, vaguely alluded to above. As always, the n-Category Cafe has the gory details.

Implementing `MyMonad (Free f)`

Typeclassopedia defines the Free monad data type.
data Free f a = Var a
              | Node (f (Free f a))
Given:
class MyMonad m where
  ret :: a -> m a
  flatMap :: m a -> (a -> m b) -> m b
Here's my incomplete attempt at implementing the MyMonad instance of this typeclass.
instance Functor f => MyMonad (Free f) where
  ret = Var
  flatMap (Var x) f = f x
  flatMap (Node xs) f = error "what goes here?"
Please help me reason about what >>=/binding means over a Free monad.
When I struggled with implementing Applicative (Free f), I was encouraged to try to implement the Monad instance.
In these kinds of situations, typed holes can help with how to proceed. They give information about the type the still unimplemented "hole" should have.
Using a typed hole instead of error in your definition:
instance Functor f => MyMonad (Free f) where
  ret = Var
  flatMap (Var x) g = g x
  flatMap (Node xs) g = _
Gives an error message like (here simplified):
Found hole `_' with type: Free f b
...
Relevant bindings include
  g :: a -> Free f b (bound at Main.hs:10:21)
  xs :: f (Free f a) (bound at Main.hs:10:17)
  flatMap :: Free f a -> (a -> Free f b) -> Free f b
    (bound at Main.hs:9:3)
That Free f b in the hole... which constructor should it have? Var or Node?
Now, a value of type Free f a is like a tree that has values of type a on the leaves (the Var constructor) and whose branching nodes are "shaped" by the functor f.
What is >>= for Free? Think of it as taking a tree and "grafting" new trees on each of its leaves. These new trees are constructed from the values in the leaves using the function that is passed to >>=.
This helps us continue: now we know that the constructor on the right-hand side of the flatMap (Node xs) g = _ equation must be Node, because "grafting" new things onto the tree never collapses already existing nodes into leaves, it only expands leaves into whole new trees.
Still using type holes:
instance Functor f => MyMonad (Free f) where
  ret = Var
  flatMap (Var x) g = g x
  flatMap (Node xs) g = Node _
Found hole `_' with type: f (Free f b)
...
Relevant bindings include
  g :: a -> Free f b (bound at Main.hs:10:21)
  xs :: f (Free f a) (bound at Main.hs:10:17)
In xs we have a Free f a wrapped in an f, but f is a functor, so we can easily map over it.
But how do we convert that Free f a into the Free f b required by the hole? Intuitively, this Free f a will be "smaller" than the one the >>= started with, because we have stripped one "branching node". Maybe it is even a leaf node, like the case covered by the other pattern match! This suggests using recursion of some kind.
Start by implementing the Functor instance. Then note that in general, a monad can be described as a functor that supports return and join :: m (m a) -> m a. Can you see how to implement join using =<< and return? Can you see how to implement =<< using fmap and join?
Spoilers
As you indicated,
data Free f a = Var a | Node (f (Free f a))
So (as you could work out with typed holes)
join :: Functor f => Free f (Free f a) -> Free f a
join (Var a) = a
join (Node m) = Node (fmap join m)
It might be useful to think about the geometry here. We descend the tree recursively, leaving the structure the same, until we get to the leaves, which we unwrap.
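Putting the spoilers together, the finished instance the question was after might look like this, using the grafting picture directly:
instance Functor f => MyMonad (Free f) where
  ret = Var
  -- graft: replace each leaf Var x with the tree g x,
  -- leaving the branching structure of the nodes untouched
  flatMap (Var x)   g = g x
  flatMap (Node xs) g = Node (fmap (`flatMap` g) xs)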
Note: =<< is the flipped version of >>=; it's more consistent with the other composition operators. $, <$>, <*>, ., =<<, and <=< all match up so you can read an expression using them from left to right or right to left without having to switch directions a few times in the middle.

Derivation of Free Monad

Control.Monad.Free implements a free monad as:
data Free f a = Pure a | Free (f (Free f a))
instance Functor f => Functor (Free f) where
  fmap f = go where
    go (Pure a) = Pure (f a)
    go (Free fa) = Free (go <$> fa)
I am having a lot of trouble understanding the second go line, especially in the context of descriptions of what a free monad is. Can someone please describe how this works and why it makes Free f a a free monad?
At this point, you're just making Free a functor rather than a monad. Of course, to be a monad, it has to be a functor as well!
I think it would be a little easier to think about if we rename the Free constructor to avoid confusion:
data Free f a = Pure a | Wrap (f (Free f a))
Now let's look at the structure of what we're building up. For the Pure case, we just have a value of type a. For the Wrap case, we have another Free f a value wrapped in the f functor.
Let's ignore the constructors for a second. That is, if we have Wrap (f (Pure a)) let's think of it as f a. This means that the structure we're building up is just f--a functor--applied repeatedly some number of times. Values of this type will look something like: f (f (f (f (f a)))). To make it more concrete, let f be [] to get: [[[[[a]]]]]. We can have as many levels of this as we want by using the Wrap constructor repeatedly; everything ends when we use Pure.
Putting the constructors back in, [[a]] would look like: Wrap [Wrap [Pure a]].
So all we're doing is taking the Pure value and repeatedly applying a functor to it.
Given this structure of a repeatedly applied functor, how would we map a function over it? For the Pure case--before we've wrapped it in f--this is pretty trivial: we just apply the function. But if we've already wrapped our value in f at least once, we have to map over the outer level and then recursively map over all the inner layers. Put another way, we have to map mapping over the Free monad over the functor f.
This is exactly what the second case of go is doing. go itself is just fmap for Free f a. <$> is fmap for f. So what we do is fmap go over f, which makes the whole thing recursive.
Since this mapping function is recursive, it can deal with an arbitrary number of levels. So we can map a function over [[a]] or [[[[a]]]] or whatever. This is why we need to fmap go when go is fmap itself--the important difference being that the first fmap works for a single layer of f and go recursively works for the whole Free f a construction.
I hope this cleared things up a bit.
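To make the recursion tangible, here is a small worked example with f = [], using the Wrap renaming from above (example is just an illustrative name):
data Free f a = Pure a | Wrap (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f = go where
    go (Pure a)  = Pure (f a)
    go (Wrap fa) = Wrap (go <$> fa)

example :: Free [] Int
example = Wrap [Pure 1, Wrap [Pure 2, Pure 3]]

-- fmap (+10) example
--   = Wrap (go <$> [Pure 1, Wrap [Pure 2, Pure 3]])
--   = Wrap [Pure 11, Wrap [Pure 12, Pure 13]]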
To tell you the truth, I usually just find it easier not to read the code in these simpler functions, but rather to read the types and then write the function myself. Think of it as a puzzle. You're trying to construct this:
mapFree :: Functor f => (a -> b) -> Free f a -> Free f b
So how do we do it? Well, let's take the Pure constructor first:
mapFree f (Pure a) = ...
-- I like to write comments like these while using Haskell, then usually delete
-- them by the end:
--
-- f :: a -> b
-- a :: a
With the two type comments in there, and knowing the type of Pure, you should see the solution right away:
mapFree f (Pure a) = Pure (f a)
Now the second case:
mapFree f (Free fa) = ...
-- f :: a -> b
-- fa :: Functor f => f (Free f a)
Well, since f is a Functor, we can use fmap to apply mapFree f to the inner component of f (Free f a). So we get:
mapFree f (Free fa) = Free (fmap (mapFree f) fa)
Now, using this definition as the Functor f => Functor (Free f) instance, we get:
instance Functor f => Functor (Free f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Free fa) = Free (fmap (fmap f) fa)
With a bit of work, you can verify that the definition we just arrived at here is the same thing as the one you're puzzling over. (As others have mentioned, (<$>) (defined in Control.Applicative) is just a synonym for fmap.) You may still not understand it, but you managed to write it, which for types as abstract as these is very often good enough.
As for understanding it, though, the thing that helps me is the following: think of a Free monad as a sort of list-like structure, with Pure as [] and Free as (:). From the definition of the type you should see this: Pure is the base case, and Free is the recursive case. What the fmap instance is doing is "pushing" the mapped function to the bottom of this structure, to where the Pure lives.
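The list analogy can even be made literal: with f = (,) w, a value of Free f a is exactly a list of ws ending in an a, and fmap only ever changes the a at the very bottom (this reuses the Pure/Free constructors from the question; Chain is an illustrative name):
type Chain w a = Free ((,) w) a   -- isomorphic to ([w], a): each Free cell carries one w

chain :: Chain String Int
chain = Free ("x", Free ("y", Pure 0))

-- fmap (+1) chain = Free ("x", Free ("y", Pure 1))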
Since I am confused myself, I answer with a question...could this be a correct substitution (relying on Tikhon's Wrap clarification)?
...
fmap g = go where
  go (Pure a) = Pure (g a)
  go (Wrap fa) = Wrap (go <$> fa)
Substituting "fmap g" for "go", and "fmap" for "<$>" (since "<$>" is infix, we flip "go" and "<$>"):
fmap g (Pure a) = Pure (g a)
fmap g (Wrap fa) = Wrap (fmap (fmap g) fa)
Substituting "f (Free f a)" for "fa" in the last line (from the first data declaration):
fmap g (Wrap fa) = Wrap ( fmap (fmap g) (f (Free f a)) )
                 = Wrap ( f ( fmap g (Free f a) ) )
                 = Wrap ( f ( Pure (g a) ) )        -- if Free f a is Pure
                 = Wrap ( f ( fmap g (Wrap fa') ) ) -- if Free f a is Wrap
The last line includes the recursion "fmap g (Wrap fa')", which would continue unless Pure is encountered.
