Why can't I generalize this from Monad to Applicative? - haskell

I generalized hoistFree from the free package to hoistFreeM, similarly to how one can generalize fmap to Data.Traversable.mapM.
{-# LANGUAGE RankNTypes #-}

import Control.Monad
import Control.Monad.Free
import Data.Traversable as T

hoistFreeM :: (Traversable g, Monad m) =>
              (forall a. f a -> m (g a)) -> Free f b -> m (Free g b)
hoistFreeM f = go
  where go (Pure x)  = return $ Pure x
        go (Free xs) = liftM Free $ T.mapM go =<< f xs
However, I don't think there is a way to further generalize it to work with any Applicative, similarly to how one can generalize Data.Traversable.mapM to Data.Traversable.traverse. Am I correct? If so, why?

You can't lift an Applicative through a Free monad, because the Monad structure demands choice (via (>>=) or join) and an Applicative can't provide that. But, perhaps unsurprisingly, you can lift an Applicative through a Free Applicative.
-- also from the `free` package
data Ap f a where
  Pure :: a -> Ap f a
  Ap   :: f a -> Ap f (a -> b) -> Ap f b

hoistAp :: (forall a. f a -> g a) -> Ap f b -> Ap g b
hoistAp _ (Pure a) = Pure a
hoistAp f (Ap x y) = Ap (f x) (hoistAp f y)

hoistApA :: Applicative v => (forall a. f a -> v (g a)) -> Ap f b -> v (Ap g b)
hoistApA _ (Pure a) = pure (Pure a)
hoistApA f (Ap x y) = Ap <$> f x <*> hoistApA f y
-- just what you'd expect, really
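A small usage sketch (the names note and example are made up for illustration, and Ap is the type defined just above): the natural transformation swaps each Maybe layer for Identity, reporting a missing value in Either.

import Data.Functor.Identity (Identity (..))

-- Illustrative only: replace every Maybe layer with Identity, failing in Either.
note :: Maybe a -> Either String (Identity a)
note = maybe (Left "missing value") (Right . Identity)

example :: Either String (Ap Identity Int)
example = hoistApA note (Ap (Just 2) (Ap (Just 40) (Pure (+))))
-- evaluates to Right (Ap (Identity 2) (Ap (Identity 40) (Pure (+))))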
To be more explicit, let's try generalizing hoistFreeM to hoistFreeA. It's easy enough to begin
hoistFreeA :: (Traversable f, Applicative v) =>
              (forall a. f a -> v (g a)) -> Free f b -> v (Free g b)
hoistFreeA _ (Pure a) = pure (Pure a)
And we can try to continue by analogy from hoistFreeM here. mapM becomes traverse and we can get as far as
hoistFreeA f (Free xs) = ?f $ traverse (hoistFreeA f) xs
where I've been using ?f as a makeshift type hole to try to figure out how to move forward. We can complete this definition if we can make
?f :: v (f (Free g b)) -> v (Free g b)
In other words, we need to transform that f layer into a g layer while living underneath our v layer. It's easy enough to get underneath v since v is a Functor, but the only way we have to transform f a to g a is our argument function forall a . f a -> v (g a).
We can try applying that f anyway along with a Free wrapper in order to fold up our g layer.
hoistFreeA f (Free xs) = ?f . fmap (fmap Free . f) $ traverse (hoistFreeA f) xs
But now we have to solve
?f :: v (v (Free g b)) -> v (Free g b)
which is just join, so we're stuck. This is fundamentally where we're always going to get stuck: free monads model Monads, and so in order to map an effectful transformation over them we need to be able to join or bind in that effect.
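For contrast, if we do allow ourselves a Monad constraint, the hole is exactly join and the definition goes through, giving something equivalent to hoistFreeM (a sketch; the name hoistFreeA' is mine):

hoistFreeA' :: (Traversable f, Monad v) =>
               (forall a. f a -> v (g a)) -> Free f b -> v (Free g b)
hoistFreeA' _ (Pure a)  = pure (Pure a)
hoistFreeA' f (Free xs) = join . fmap (fmap Free . f) $ traverse (hoistFreeA' f) xs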

Related

Composing Applicatives

I'm reading through Chapter 25 (Composing Types) of the haskellbook, and wish to understand applicative composition more completely.
The author provides a type to embody type composition:
newtype Compose f g a =
  Compose { getCompose :: f (g a) }
  deriving (Eq, Show)
and supplies the functor instance for this type:
instance (Functor f, Functor g) =>
         Functor (Compose f g) where
  fmap f (Compose fga) =
    Compose $ (fmap . fmap) f fga
But the Applicative instance is left as an exercise to the reader:
instance (Applicative f, Applicative g) =>
         Applicative (Compose f g) where
  -- pure :: a -> Compose f g a
  pure = Compose . pure . pure

  -- (<*>) :: Compose f g (a -> b)
  --       -> Compose f g a
  --       -> Compose f g b
  Compose fgf <*> Compose fgx = undefined
I can cheat and look the answer up online... The source for Data.Functor.Compose provides the applicative instance definition:
Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)
but I'm having trouble understanding what's going on here. According to the type signatures, both f and x are wrapped up in two layers of applicative structure. The roadblock I seem to be hitting, though, is understanding what's going on with this bit: (<*>) <$> f. I will probably have follow-up questions, but they probably depend on how that expression is evaluated. Is it saying "fmap <*> over f" or "apply <$> to f"?
Please help to arrive at an intuitive understanding of what's happening here.
Thanks! :)
Consider the expression a <$> b <*> c. It means: take the function a and map it over the functor b, which yields a functor full of partially applied functions, and then apply those functions to the values inside the functor c.
First, imagine that a is (\x y -> x + y), b is Just 3, and c is Just 5. a <$> b then evaluates to Just (\y -> 3 + y), and a <$> b <*> c then evaluates to Just 8.
(If what's before here doesn't make sense, then you should try to understand single layers of applicatives further before you try to understand multiple layers of them.)
Similarly, in your case, a is (<*>), b is f, and c is x. If you were to choose suitable values for f and x, you'd see that they can be easily evaluated as well (though be sure to keep your layers distinct; the (<*>) in your case belongs to the inner Applicative, whereas the <$> and <*> belong to the outer one).
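To make that concrete for Compose, here is an illustrative example with f = Maybe, g = [] and made-up values, assuming the instance is completed with the one-liner shown above:

example :: Compose Maybe [] Int
example = Compose (Just [(+ 1), (* 10)]) <*> Compose (Just [1, 2])

-- (<*>) <$> Just [(+ 1), (* 10)]             ==  Just ([(+ 1), (* 10)] <*>)
-- Just ([(+ 1), (* 10)] <*>) <*> Just [1, 2] ==  Just ([(+ 1), (* 10)] <*> [1, 2])
--                                            ==  Just [2, 3, 10, 20]
-- so getCompose example == Just [2, 3, 10, 20]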
Rather than <*>, you can define liftA2.
{-# LANGUAGE DeriveFunctor #-}

import Control.Applicative (Applicative (..))

newtype Compose f g a = Compose
  { getCompose :: f (g a) }
  deriving Functor

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure a = Compose (pure (pure a))

  -- liftA2 :: (a -> b -> c) -> Compose f g a -> Compose f g b -> Compose f g c
  liftA2 f (Compose fga) (Compose fgb) = Compose _1
We have fga :: f (g a) and fgb :: f (g b) and we need _1 :: f (g c). Since f is applicative, we can combine those two values using liftA2:
liftA2 f (Compose fga) (Compose fgb) = Compose (liftA2 _2 fga fgb)
Now we need
_2 :: g a -> g b -> g c
Since g is also applicative, we can use its liftA2 as well:
liftA2 f (Compose fga) (Compose fgb) = Compose (liftA2 (liftA2 f) fga fgb)
This pattern of lifting liftA2 applications is useful for other things too. Generally speaking,
liftA2 . liftA2 :: (Applicative f, Applicative g) => (a -> b -> c) -> f (g a) -> f (g b) -> f (g c)
liftA2 . liftA2 . liftA2
  :: (Applicative f, Applicative g, Applicative h)
  => (a -> b -> c) -> f (g (h a)) -> f (g (h b)) -> f (g (h c))
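A quick sanity check in GHCi (example values made up):

-- (liftA2 . liftA2) (+) (Just [1, 2]) (Just [10, 20])
--   == Just [11, 21, 12, 22]
-- (liftA2 . liftA2 . liftA2) (+) [Just [1]] [Just [10, 20]]
--   == [Just [11, 21]]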

Is there an efficient, lazy way to fuse foldMap with traverse?

A recent proposal on the Haskell libraries mailing list led me to consider the following:
ft :: (Applicative f, Monoid m, Traversable t)
   => (b -> m) -> (a -> f b) -> t a -> f m
ft f g xs = foldMap f <$> traverse g xs
I noticed that the Traversable constraint can be weakened to Foldable:
import Data.Monoid (Ap (..)) -- Requires a recent base version

ft :: (Applicative f, Monoid m, Foldable t)
   => (b -> m) -> (a -> f b) -> t a -> f m
ft f g = getAp . foldMap (Ap . fmap f . g)
In the original proposal, f was supposed to be id, leading to
foldMapA
  :: (Applicative f, Monoid m, Foldable t)
  => (a -> f m) -> t a -> f m
-- foldMapA g = getAp . foldMap (Ap . fmap id . g)
foldMapA g = getAp . foldMap (Ap . g)
which is strictly better than the traverse-then-fold approach.
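For instance (an illustrative use, values made up), foldMapA can validate every element and combine the results in a single pass:

-- foldMapA (\x -> if x > 0 then Just (Sum x) else Nothing) [1, 2, 3]
--   == Just (Sum {getSum = 6})
-- foldMapA (\x -> if x > 0 then Just (Sum x) else Nothing) [1, -2, 3]
--   == Nothing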
But in the more general ft, there's a potential problem: fmap could be expensive in the f functor, in which case the fused version could potentially be more expensive than the original!
The usual tools for dealing with expensive fmap are Yoneda and Coyoneda. Since we need to lift many times and only lower once, Coyoneda is the one that can help us:
import Data.Functor.Coyoneda

ft' :: (Applicative f, Monoid m, Foldable t)
    => (b -> m) -> (a -> f b) -> t a -> f m
ft' f g = lowerCoyoneda . getAp
        . foldMap (Ap . fmap f . liftCoyoneda . g)
So now we replace all those expensive fmaps with one (buried in lowerCoyoneda). Problem solved? Not quite.
The trouble with Coyoneda is that its liftA2 is strict. So if we write something like
import Data.Monoid (First (..))
ft' (First . Just) Identity $ 1 : undefined
-- or, importing Data.Functor.Reverse,
ft' (Last . Just) Identity (Reverse $ 1 : undefined)
then it will fail, whereas ft has no trouble with those. Is there a way to have our cake and eat it too? That is, a version that uses only a Foldable constraint, only fmaps O(1) times more than traverse in the f functor, and is just as lazy as ft?
Note: we could make liftA2 for Coyoneda somewhat lazier:
liftA2 f m n = liftCoyoneda $
  case (m, n) of
    (Coyoneda g x, Coyoneda h y) -> liftA2 (\p q -> f (g p) (h q)) x y
This is enough to let it produce an answer to ft' (First . Just) Identity $ 1 : 2 : undefined, but not to ft' (First . Just) Identity $ 1 : undefined. I don't see any obvious way to make it lazier than that, because pattern matches on existentials must always be strict.
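For reference, Coyoneda (from the kan-extensions package) is an existential wrapper, which is why any pattern match on it necessarily forces the constructor:

data Coyoneda f a where
  Coyoneda :: (b -> a) -> f b -> Coyoneda f a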
I don't believe it's possible. Avoiding fmaps at the elements seems to require some knowledge of the structure of the container. For example, the Traversable instance for lists can be written
traverse f (x : xs) = liftA2 (:) (f x) (traverse f xs)
We know that the first argument of (:) is a single element, so we can use liftA2 to combine the process of mapping over the action for that element with the process of combining the result of that action with the result associated with the rest of the list.
In a more generic context, the structure of a fold can be captured faithfully using a magma type with a bogus Monoid instance:
{-# LANGUAGE DeriveTraversable #-}

data Magma a = Bin (Magma a) (Magma a) | Leaf a | Nil
  deriving (Functor, Foldable, Traversable)

instance Semigroup (Magma a) where
  (<>) = Bin

instance Monoid (Magma a) where
  mempty = Nil

toMagma :: Foldable t => t a -> Magma a
toMagma = foldMap Leaf
We can write
ft'' :: (Applicative f, Monoid m, Foldable t)
     => (b -> m) -> (a -> f b) -> t a -> f m
ft'' f g = fmap (lowerMagma f) . traverse g . toMagma

lowerMagma :: Monoid m => (a -> m) -> Magma a -> m
lowerMagma f (Bin x y) = lowerMagma f x <> lowerMagma f y
lowerMagma f (Leaf x)  = f x
lowerMagma _ Nil       = mempty
But there's trouble in the Traversable instance:
traverse f (Leaf x) = Leaf <$> f x
That's exactly the sort of trouble we were trying to avoid. And there's no lazy fix for it. If we encounter Bin l r, we can't lazily determine whether l or r are leaves. So we're stuck. If we allowed a Traversable constraint on ft'', we could capture the result of traversing with a richer sort of magma type (such as one used in lens), which I suspect could let us do something more clever though I haven't found anything yet.

Granted a traversable F-Algebra, is it possible to have a catamorphism over an applicative algebra?

I have this F-Algebra (introduced in a previous question), and I want to cast an effectful algebra upon it. Through desperate trial, I managed to put together a monadic catamorphism that works. I wonder if it may be generalized to an applicative, and if not, why.
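(The F-algebra itself is not repeated here. For completeness, a minimal sketch of the definitions the examples below assume; the names Fix, Expr, branch, leaf and evalSum are taken from the usage, and the details are guessed, so the original question's definitions may differ.)

{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

newtype Fix f = Fix { unFix :: f (Fix f) }

data Expr a = Branch [a] | Leaf Int
  deriving (Functor, Foldable)

branch :: [Fix Expr] -> Fix Expr
branch = Fix . Branch

leaf :: Int -> Fix Expr
leaf = Fix . Leaf

evalSum :: Expr Int -> Int
evalSum (Branch xs) = sum xs
evalSum (Leaf i)    = i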
This is how I defined Traversable:
instance Traversable Expr where
  traverse f (Branch xs) = fmap Branch $ traverse f xs
  traverse f (Leaf i)    = pure $ Leaf i
This is the monadic catamorphism:
type AlgebraM a f b = a b -> f b
cataM :: (Monad f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataM f x = f =<< (traverse (cataM f) . unFix $ x)
And this is how it works:
λ let printAndReturn x = print x >> pure x
λ cataM (printAndReturn . evalSum) $ branch [branch [leaf 1, leaf 2], leaf 3]
1
2
3
3
6
6
My idea now is that I could rewrite like this:
cataA :: (Applicative f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataA f x = do
  subtree <- traverse (cataA f) . unFix $ x
  value   <- f subtree
  return value
Unfortunately, value here depends on subtree and, according to a paper on applicative do-notation, in such a case we cannot desugar to Applicative. It seems like there's no way around this; we need a monad to float results up from the depths of nesting.
Is it true? Can I safely conclude that only flat structures can be folded with applicative effects alone?
Can I safely conclude that only flat structures can be folded with applicative effects alone?
You can say that again! After all, "flattening nested structures" is exactly what makes a monad a monad, rather than Applicative which can only combine adjacent structures. Compare (a version of) the signatures of the two abstractions:
class Functor f => Applicative f where
  pure  :: a -> f a
  (<.>) :: f a -> f b -> f (a, b)

class Applicative m => Monad m where
  join :: m (m a) -> m a
What Monad adds to Applicative is the ability to flatten nested ms into one m. That's why []'s join is concat. Applicative only lets you smash together heretofore-unrelated fs.
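Concretely, for lists (a quick illustration):

-- join [[1, 2], [3]]      == [1, 2, 3]     -- i.e. concat
-- (,) <$> [1, 2] <*> "ab" == [(1,'a'), (1,'b'), (2,'a'), (2,'b')]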
It's no coincidence that the free monad's Free constructor contains a whole f full of Free fs, whereas the free applicative's Ap constructor only contains one Ap f.
data Free f a = Return a | Free (f (Free f a))

data Ap f a where
  Pure :: a -> Ap f a
  Cons :: f a -> Ap f b -> Ap f (a, b)
Hopefully that gives you some intuition as to why you should expect that it's not possible to fold a tree using an Applicative.
Let's play a little type tennis to see how it shakes out. We want to write
cataA :: (Traversable f, Applicative m) => (f a -> m a) -> Fix f -> m a
cataA f (Fix xs) = _
We have xs :: f (Fix f) and a Traversable for f. My first instinct here is to traverse the f to fold the contained subtrees:
cataA f (Fix xs) = _ $ traverse (cataA f) xs
The hole now has a goal type of m (f a) -> m a. Since there's an f :: f a -> m a knocking about, let's try going under the m to convert the contained fs:
cataA f (Fix xs) = _ $ fmap f $ traverse (cataA f) xs
Now we have a goal type of m (m a) -> m a, which is join. So you do need a Monad after all.
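Accepting the Monad constraint, the rally finishes with join and we recover the cataM from the question (the same definition, just pattern matching on Fix instead of using unFix):

cataM :: (Traversable f, Monad m) => (f a -> m a) -> Fix f -> m a
cataM f (Fix xs) = join $ fmap f $ traverse (cataM f) xs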

Analog of free monads for Profunctors

We can define data Free f a = Pure a | Free (f (Free f a)) and so have Functor f => Monad (Free f).
If we define
data T f a b = R a | S b | T (f a (T f a b))

do we have some analogous M so that Profunctor f => M (T f a)? Here

class Profunctor f where
  dimap :: (a -> b) -> (c -> d) -> f b c -> f a d
I've been wondering about a potential analogue for Data.Comp.Param.Term.Context ever since I noticed that Data.Comp.Term.Context and Free are isomorphic.
There's a more appropriate notion of making a free thing from a profunctor. Then we can work by analogy.
A free monoid, Y, generated by a set X can be thought of as the solution to the equation "Y = 1 + XY". In Haskell notation that is
data List a = Nil | Cons a (List a)
A free monad, M, generated by the functor F can be thought of as the solution to the equation "M = 1 + FM", where the product "FM" is the composition of functors and 1 is the identity functor. In Haskell notation that is
data Free f a = Pure a | Free (f (Free f a))
Making something free from a profunctor P should look like a solution, A, to "A=1+PA". The product "PA" is the standard composition of profunctors. The 1 is the "identity" profunctor, (->). So we get
data Free p a b = Pure (a -> b) | forall x. Free (p a x) (Free p x b)
This is also a profunctor:
instance Profunctor p => Profunctor (Free p) where
  lmap f (Pure g)   = Pure (g . f)
  lmap f (Free g h) = Free (lmap f g) h
  rmap f (Pure g)   = Pure (f . g)
  rmap f (Free g h) = Free g (rmap f h)
If the profunctor is strong then so is the free version:
instance Strong p => Strong (Free p) where
  first' (Pure f)   = Pure (first' f)
  first' (Free f g) = Free (first' f) (first' g)
But what actually is Free p? It's a thing called a pre-arrow; restricting further, free strong profunctors are arrows:
instance Profunctor p => Category (Free p) where
  id = Pure id
  Pure f   . Pure g   = Pure (f . g)
  Free g h . Pure f   = Free (lmap f g) h
  Pure f   . Free g h = Free g (Pure f . h)
  f        . Free g h = Free g (f . h)

instance (Profunctor p, Strong p) => Arrow (Free p) where
  arr   = Pure
  first = first'
Intuitively you can think of an element of a profunctor P a b as taking an a-ish thing to a b-ish thing, the canonical example being given by (->). Free P is an unevaluated chain of these elements with compatible (but unobservable) intermediate types.
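As a tiny illustration (values made up, using p = (->)), here is a chain of two unevaluated steps, plus a lowering function that works only for this special case:

chain :: Free (->) Int String
chain = Free (* 2) (Free show (Pure reverse))

lower :: Free (->) a b -> a -> b
lower (Pure f)   = f
lower (Free f k) = lower k . f

-- lower chain 12 == "42"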
So I think I figured it out: M ~ Monad ☺ (the code below writes In, Hole and Var for the question's T, S and R constructors):
import Control.Monad (ap)

instance Profunctor f => Functor (T f a) where
  fmap f (In m)   = In (dimap id (fmap f) m)
  fmap f (Hole x) = Hole (f x)
  fmap _ (Var v)  = Var v

instance Profunctor f => Applicative (T f a) where
  pure  = Hole
  (<*>) = ap

instance Profunctor f => Monad (T f a) where
  In m >>= f   = In (dimap id (>>= f) m)
  Hole x >>= f = f x
  Var v >>= _  = Var v
Seems obvious in hindsight.

Parse error in pattern: f . g in fmap (f . g) = fmap f . fmap g

Parse error in pattern: f . g
I am a beginner; where is this code wrong?
(f . g) x = f (g x)

class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor g where
  fmap :: (a -> b) -> f a -> f b

instance Functor F where
  fmap id = id
  fmap (f . g) = fmap f . fmap g
When you make an instance of Functor, you should prove the side condition that
fmap id = id
and
fmap (f . g) = fmap f . fmap g
(Technically the latter comes for free given the types involved and the former law, but it is still a good exercise.)
You can't do this just by saying
fmap id = id
but instead you use this as a reasoning tool -- once you have proven it.
That said, the code that you have written doesn't make sense for a number of reasons.
(f . g) x = f (g x)
Since this is indented, I'm somewhat unclear if this is intended to be a definition for (.), but that is already included in the Prelude, so you need not define it again.
class Functor f where
fmap :: (a -> b) -> f a -> f b
This definition is also provided for you in the Prelude.
class Functor g where
fmap :: (a -> b) -> f a -> f b
But then you define the class again, but here it has mangled the signature of fmap, which would have to be
fmap :: (a -> b) -> g a -> g b
But as you have another definition of Functor right above (and the Prelude has still another), you couldn't get that to compile.
Finally, your
instance Functor F where
  fmap id = id
  fmap (f . g) = fmap f . fmap g
makes up a name F for a type that you want to make into an instance of Functor, and then tries to give the laws as an implementation, which isn't how it works.
Let us take an example of how it should work.
Consider a very simple functor:
data Pair a = Pair a a

instance Functor Pair where
  fmap f (Pair a b) = Pair (f a) (f b)
now, to prove fmap id = id, let us consider what fmap id and id do pointwise:
fmap id (Pair a b) = -- by definition
Pair (id a) (id b) = -- by beta reduction
Pair a (id b)      = -- by beta reduction
Pair a b

id (Pair a b) = -- by definition
Pair a b
So, fmap id = id in this particular case.
Then you can check (though technically given the above, you don't have to) that fmap f . fmap g = fmap (f . g)
(fmap f . fmap g) (Pair a b) = -- definition of (.)
fmap f (fmap g (Pair a b))   = -- definition of fmap
fmap f (Pair (g a) (g b))    = -- definition of fmap
Pair (f (g a)) (f (g b))

fmap (f . g) (Pair a b)      = -- definition of fmap
Pair ((f . g) a) ((f . g) b) = -- definition of (.)
Pair (f (g a)) ((f . g) b)   = -- definition of (.)
Pair (f (g a)) (f (g b))
so fmap f . fmap g = fmap (f . g)
Now, you can make function composition into a functor, an instance of

class Functor f where
  fmap :: (a -> b) -> f a -> f b

obtained by partially applying the function arrow constructor.
Note that a -> b and (->) a b mean the same thing, so when we say
instance Functor ((->) e) where
the signature of fmap specializes to
fmap {- for (->) e -} :: (a -> b) -> (->) e a -> (->) e b
which, once you rewrite (->) e a and (->) e b in infix form, looks like
fmap {- for (->) e -} :: (a -> b) -> (e -> a) -> e -> b
but this is just the signature for function composition!
So
instance Functor ((->) e) where
  fmap f g x = f (g x)

is a perfectly reasonable definition, or even

instance Functor ((->) e) where
  fmap = (.)
and it actually shows up in Control.Monad.Instances (in current versions of base the instance is in scope from the Prelude itself).
So all you need to do to use it is
import Control.Monad.Instances
(or, with a modern GHC, nothing at all); you don't need to write any code to support this, and you can use fmap as function composition as a special case, so for instance
fmap (+1) (*2) 3 =
((+1) . (*2)) 3 =
((+1) ((*2) 3)) =
((+1) (3 * 2)) =
3 * 2 + 1 =
7
Since (.) is not a data constructor, you cannot use it for pattern matching, I believe. As far as I can tell there isn't an easy way to do what you're trying, although I'm pretty new to Haskell as well.
let is not used for top-level bindings, just do:
f . g = \x -> f (g x)
But the complaint, as cobbal said, is about fmap (f . g), which isn't valid. Actually, that whole class Functor F where is screwy. The class is already declared; now I think you want to make an instance:
instance Functor F where
  fmap f SomeConstructorForF  = ...
  fmap f OtherConstructorForF = ...
  -- etc.

Resources