I am working my way through this tutorial. As documented there, I copied the code below, which represents functor composition and the identity functor:
{-# LANGUAGE FlexibleContexts #-}
module Test where
newtype FComp f g a = C { unC :: f (g a) }
instance (Show (f (g a))) => Show (FComp f g a) where
  show (C x) = "FComp " ++ show x

instance (Functor f, Functor g) => Functor (FComp f g) where
  fmap h (C x) = C (fmap (fmap h) x)

newtype Id a = Identity { unId :: a } deriving Show

instance Functor Id where
  fmap f x = Identity (f (unId x))
Now, this is what the tutorial states about identity functor:
Composition with the identity functor in the same category is as expected.
F ∘ Id_B = F
Id_A ∘ F = F
What I am stuck on is trying to think of this in terms of functor composition as represented by FComp in the code above. An example:
$ let a = C (Identity (Just (5::Int)))
$ :t a
a :: FComp Id Maybe Int
$ let b = C (Just (Identity (5::Int)))
$ :t b
b :: FComp Maybe Id Int
I can't think of a way to assert that the types of a and b, as shown in the example above, are identical. I would appreciate pointers on how to think of the identity functor in terms of functor composition.
Like many equations in Haskell-applied category theory, F ∘ Id_B ≡ Id_A ∘ F ≡ F should really just be read as equivalences. FComp Id Maybe Int is very much not the same thing as FComp Maybe Id Int to the type checker; however, you can easily write
idFunctorIso :: Functor f => FComp f Id a -> f a
idFunctorIso (C fIdca) = fmap unId fIdca
idFunctorIso' :: Functor f => f a -> FComp f Id a
idFunctorIso' fa = C $ fmap Identity fa
which implies that both types contain the same information¹. That's what we mean by saying they're isomorphic.
¹ No information is lost in either direction, since idFunctorIso' . idFunctorIso ≡ id and idFunctorIso . idFunctorIso' ≡ id (as follows from the functor law fmap id ≡ id, together with the fact that unC and unId are simple inverses of the newtype constructors).
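The same thing works on the other side. For your value a :: FComp Id Maybe Int, the relevant isomorphism is FComp Id g a ≅ g a. A minimal sketch (the names idFunctorIsoL and idFunctorIsoL' are mine, not from the tutorial; no Functor constraint is even needed here, since Id is the outer layer):
idFunctorIsoL :: FComp Id g a -> g a
idFunctorIsoL (C (Identity ga)) = ga

idFunctorIsoL' :: g a -> FComp Id g a
idFunctorIsoL' ga = C (Identity ga)
For instance, idFunctorIsoL a :: Maybe Int recovers Just 5.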
I have a Bitraversable called t that supports this operation:
someName :: Monad m => (t (m a) (m b) -> c) -> m (t a b) -> c
In other words, it's possible to take a function that accepts two monadic values packaged in the bitraversable and turn it into a function that accepts a single monadic value containing a bitraversable with no inner monad layer. This is something like a bitraversable, higher-level version of distribute; the type signature is similar to this:
\f -> \x -> f (distribute x)
:: (Distributive g, Functor f) => (g (f a) -> c) -> f (g a) -> c
My questions:
Is there a standard name for this "higher-level" version of distribute that works on functions that accept distributives rather than distributives themselves?
Is there a name for the bitraversable version?
Does it work with every bitraversable/functor/monad/whatever, or are there restrictions?
As per @Noughtmare, your "higher level" functions someName and distribute are just written in continuation-passing style. These generally aren't worth additional names, because each is just a right section of function composition:
highLevelDistribute = (. distribute)
Practically speaking, anywhere you want to call highLevelDistribute on an argument:
highLevelDistribute f
this expression is equivalent to:
f . distribute
and even if you're using highLevelDistribute as a first-class value, it's just not that hard to write and understand the section (. distribute).
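To make the section concrete, here's an illustrative instantiation (example names mine, assuming the Distributive instance for functions from Data.Distributive), with f ~ [] and g ~ ((->) Int):
import Data.Distributive (distribute)

-- a consumer that expects the already-distributed shape
consume :: (Int -> String) -> Int
consume h = length (h 0)

-- (. distribute) turns it into a consumer of the undistributed shape
consume' :: [Int -> Char] -> Int
consume' = consume . distribute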
Note that traverse and sequenceA are a little different, since we have:
sequenceA = traverse id
You could make an argument that this difference doesn't really warrant separate names either, but that's an argument for another day.
Getting back to someName, it's a CPS version of:
someOtherName :: m (t a b) -> t (m a) (m b)
which looks like a bifunctor analogue of distribute:
distribute :: (Distributive g, Functor f) => f (g a) -> g (f a)
So, I'd suggest inventing a Bidistributive to reflect this, and someOtherName becomes bidistribute:
class Bifunctor g => Bidistributive g where
  {-# MINIMAL bidistribute | bicollect #-}
  bidistribute :: Functor f => f (g a b) -> g (f a) (f b)
  bidistribute = bicollect id
  bicollect :: Functor f => (a -> g b c) -> f a -> g (f b) (f c)
  bicollect f = bidistribute . fmap f
Again, your "higher level" someName is just right-composition:
someName = (. bidistribute)
Reasonable laws for a Bidistributive would probably include the following. I'm not sure if these are sufficiently general and/or exhaustive:
-- naturality
bimap (fmap f) (fmap g) . bidistribute = bidistribute . fmap (bimap f g)
-- identity
bidistribute . Identity = bimap Identity Identity
-- composition
bimap Compose Compose . bidistribute . fmap bidistribute = bidistribute . Compose
For your question #3, not all Bitraversables are Bidistributive, for much the same reason that not all Traversables are Distributive. A Distributive allows you to "expose structure" under an arbitrary functor. So, for example, there's no Distributive instance for lists, because if there was, you could call:
distribute :: IO [a] -> [IO a]
which would allow you to determine if a list returned by an IO action was empty or not, without executing the IO action.
Similarly, Either is Bitraversable, but it can't be Bidistributive, because if it was, you'd be able to use:
bidistribute :: IO (Either a b) -> Either (IO a) (IO b)
to determine if the IO action returned a Left or Right without having to execute the IO action.
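By contrast, a bifunctor whose values always contain both slots can be Bidistributive, since no choice of shape depends on the wrapped functor. A minimal sketch (the type Both is mine and purely illustrative; it relies on the hypothetical Bidistributive class above and on Data.Bifunctor for the Bifunctor class):
data Both a b = Both a b

instance Bifunctor Both where
  bimap f g (Both a b) = Both (f a) (g b)

instance Bidistributive Both where
  bidistribute fab =
    Both (fmap (\(Both a _) -> a) fab) (fmap (\(Both _ b) -> b) fab)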
One interesting thing about bidistribute is that the "other functor" can be any Functor; it doesn't need to be an Applicative. So, just as we have:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
distribute :: (Distributive g, Functor f) => f (g a) -> g (f a)
we have:
bisequence :: (Bitraversable t, Applicative f) => t (f a) (f b) -> f (t a b)
bidistribute :: (Bidistributive g, Functor f) => f (g a b) -> g (f a) (f b)
Intuitively, sequencing needs the power of an applicative functor f to be able to "build" the f (t a) from a traversal of its functorial f a "parts", while distribution only needs to take the f (g a) apart. In practical terms, this means that sequencing typically looks like this:
-- specialized to t ~ []
sequenceA :: Applicative f => [f a] -> f [a]
sequenceA []     = pure []
sequenceA (f:fs) = (:) <$> f <*> sequenceA fs  -- need applicative operations
while distribution typically looks like this:
-- specialized to g ~ (->) r
distribute :: Functor f => f (r -> a) -> (r -> f a)
distribute f r = fmap ($ r) f -- only need fmap
(Technically, according to the documentation for Data.Distributive, the Distributive class only requires a Functor rather than some coapplicative class because of the lack of non-trivial comonoids in Haskell. See this SO answer.)
A recent proposal on the Haskell libraries mailing list led me to consider the following:
ft :: (Applicative f, Monoid m, Traversable t)
   => (b -> m) -> (a -> f b) -> t a -> f m
ft f g xs = foldMap f <$> traverse g xs
I noticed that the Traversable constraint can be weakened to Foldable:
import Data.Monoid (Ap (..)) -- Requires a recent base version
ft :: (Applicative f, Monoid m, Foldable t)
   => (b -> m) -> (a -> f b) -> t a -> f m
ft f g = getAp . foldMap (Ap . fmap f . g)
In the original proposal, f was supposed to be id, leading to
foldMapA
  :: (Applicative f, Monoid m, Foldable t)
  => (a -> f m) -> t a -> f m
--foldMapA g = getAp . foldMap (Ap . fmap id . g)
foldMapA g = getAp . foldMap (Ap . g)
which is strictly better than the traverse-then-fold approach.
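As a quick illustration of foldMapA (example values mine), with Maybe as the applicative and Sum as the monoid:
import Data.Monoid (Sum (..))

nonNegative :: Int -> Maybe (Sum Int)
nonNegative x = if x >= 0 then Just (Sum x) else Nothing

-- foldMapA nonNegative [1, 2, 3]  == Just (Sum 6)
-- foldMapA nonNegative [1, -2, 3] == Nothing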
But in the more general ft, there's a potential problem: fmap could be expensive in the f functor, in which case the fused version could potentially be more expensive than the original!
The usual tools for dealing with expensive fmap are Yoneda and Coyoneda. Since we need to lift many times and only lower once, Coyoneda is the one that can help us:
import Data.Functor.Coyoneda
ft' :: (Applicative f, Monoid m, Foldable t)
=> (b -> m) -> (a -> f b) -> t a -> f m
ft' f g = lowerCoyoneda . getAp
        . foldMap (Ap . fmap f . liftCoyoneda . g)
So now we replace all those expensive fmaps with one (buried in lowerCoyoneda). Problem solved? Not quite.
The trouble with Coyoneda is that its liftA2 is strict. So if we write something like
import Data.Monoid (First (..), Last (..))
ft' (First . Just) Identity $ 1 : undefined
-- or, importing Data.Functor.Reverse,
ft' (Last . Just) Identity (Reverse $ 1 : undefined)
then it will fail, whereas ft has no trouble with those. Is there a way to have our cake and eat it too? That is, a version that uses only a Foldable constraint, only fmaps O(1) times more than traverse in the f functor, and is just as lazy as ft?
Note: we could make liftA2 for Coyoneda somewhat lazier:
liftA2 f m n = liftCoyoneda $
  case (m, n) of
    (Coyoneda g x, Coyoneda h y) -> liftA2 (\p q -> f (g p) (h q)) x y
This is enough to let it produce an answer to ft' (First . Just) Identity $ 1 : 2 : undefined, but not to ft' (First . Just) Identity $ 1 : undefined. I don't see any obvious way to make it lazier than that, because pattern matches on existentials must always be strict.
I don't believe it's possible. Avoiding fmaps at the elements seems to require some knowledge of the structure of the container. For example, the Traversable instance for lists can be written
traverse f (x : xs) = liftA2 (:) (f x) (traverse f xs)
We know that the first argument of (:) is a single element, so we can use liftA2 to combine the process of mapping over the action for that element with the process of combining the result of that action with the result associated with the rest of the list.
In a more generic context, the structure of a fold can be captured faithfully using a magma type with a bogus Monoid instance:
-- needs: {-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
data Magma a = Bin (Magma a) (Magma a) | Leaf a | Nil
  deriving (Functor, Foldable, Traversable)

instance Semigroup (Magma a) where
  (<>) = Bin

instance Monoid (Magma a) where
  mempty = Nil
toMagma :: Foldable t => t a -> Magma a
toMagma = foldMap Leaf
We can write
ft'' :: (Applicative f, Monoid m, Foldable t)
=> (b -> m) -> (a -> f b) -> t a -> f m
ft'' f g = fmap (lowerMagma f) . traverse g . toMagma
lowerMagma :: Monoid m => (a -> m) -> Magma a -> m
lowerMagma f (Bin x y) = lowerMagma f x <> lowerMagma f y
lowerMagma f (Leaf x) = f x
lowerMagma _ Nil = mempty
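A quick sanity check of these definitions (example values mine):
import Data.Monoid (Sum (..))

check1, check2 :: Maybe (Sum Int)
check1 = ft'' Sum Just [1, 2, 3]      -- Just (Sum {getSum = 6})
check2 = ft'' Sum Just ([] :: [Int])  -- Just (Sum {getSum = 0})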
But there's trouble in the Traversable instance:
traverse f (Leaf x) = Leaf <$> f x
That's exactly the sort of trouble we were trying to avoid. And there's no lazy fix for it. If we encounter Bin l r, we can't lazily determine whether l or r is a leaf. So we're stuck. If we allowed a Traversable constraint on ft'', we could capture the result of traversing with a richer sort of magma type (such as one used in lens), which I suspect could let us do something more clever, though I haven't found anything yet.
I'm trying to understand the explanation in Monads made difficult, and I'm having a hard time figuring out the following newtype definition:
newtype (FComp g f) x = FComp { unCompose :: g (f x) }
instance (Functor b c f, Functor a b g) => Functor a c (FComp g f) where
  fmap f (FComp xs) = FComp $ fmap (fmap f) xs
I have nowhere seen an explanation of what newtype means when the type constructor and its arguments are wrapped in parentheses in place of the usual declaration head. I therefore cannot figure out what the definition of the fmap function means. I also don't understand why the unCompose field accessor is defined but never used. I feel like I am missing some basic semantics of newtype.
A little test:
newtype (FComp g f) x = FComp { unCompose :: g (f x) }
newtype FComp2 g f x = FComp2 { unCompose2 :: g (f x) }
*Main> :i FComp
newtype FComp g f x = FComp {unCompose :: g (f x)}
-- Defined at Test.hs:34:10
*Main> :i FComp2
newtype FComp2 g f x = FComp2 {unCompose2 :: g (f x)}
-- Defined at Test.hs:35:9
So the parentheses really don't change anything. It's just the same as without them.
As for unCompose, it's just a name to unwrap the newtype without making the data constructor explicit. In the snippet you posted they use pattern matching, but one wouldn't want to export such implementation details, so unCompose is provided for accessing the contents of FComp. This is just the same as in data definitions, except that newtype requires exactly one field, whereas data allows any number.
You could write this:
newtype (FComp g f) x = FComp { unCompose :: g (f x) }
like so:
newtype FComp g f x = FComp (g (f x))
unCompose (FComp it) = it
This is so because type application has the same syntactic properties as ordinary value-level application, i.e.:
a b c = (a b) c
holds for values a,b,c and for types a,b,c.
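For example (values mine), FComp Maybe [] is itself a type constructor still awaiting one argument, exactly as (a b) c groups at the value level:
wrapped :: FComp Maybe [] Int
wrapped = FComp (Just [1, 2, 3])

inner :: Maybe [Int]
inner = unCompose wrapped  -- Just [1,2,3]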
In category theory, a monad can be constructed from two adjoint functors. In particular, if C and D are categories and F : C --> D and G : D --> C are adjoint functors, in the sense that there is a bijection
Hom(F X, Y) ≅ Hom(X, G Y)
for each X in C and Y in D, then the composite G ∘ F : C --> C is a monad.
One such pair of adjoint functors can be given by fixing a type b and taking F and G to be
data F b a = F (a,b)
data G b a = G (b -> a)
instance Functor (F b) where
  fmap f (F (a, b)) = F (f a, b)

instance Functor (G b) where
  fmap f (G g) = G (f . g)
and the bijection between hom-sets is given (modulo constructors) by currying:
iso1 :: (F b a -> c) -> a -> G b c
iso1 f = \a -> G $ \b -> f (F (a,b))
iso2 :: (a -> G b c) -> F b a -> c
iso2 g = \(F (a,b)) -> let (G g') = g a in g' b
in which case the corresponding monad is
data M b a = M { unM :: b -> (a,b) }
instance Monad (M b) where
  return a = M (\b -> (a, b))
  (M f) >>= g = M (\r -> let (a, r') = f r in unM (g a) r')
I don't know what the name for this monad should be, except that it seems to be something like a reader monad that carries around a piece of over-writeable information (edit: dbaupp points out in the comments that this is the State monad.)
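(For concreteness, M b lines up with the usual State b; a sketch assuming Control.Monad.State from mtl/transformers:)
toState :: M b a -> State b a
toState (M f) = state f

fromState :: State b a -> M b a
fromState = M . runState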
So the State monad can be "decomposed" as the pair of adjoint functors F and G, and we could write
State = G . F
So far, so good.
I'm now trying to figure out how to decompose other common monads into pairs of adjoint functors - for example Maybe, [], Reader, Writer, Cont - but I can't figure out what the pairs of adjoint functors that we can "decompose" them into are.
The only simple case seems to be the Identity monad, which can be decomposed into any pair of functors F and G such that F is inverse to G (in particular, you could just take F = Identity and G = Identity).
Can anyone shed some light?
What you're looking for is the Kleisli category. It was originally developed to show that every monad can be constructed from two adjoint functors.
The problem is that Haskell's Functor is not a general functor; it's an endofunctor on the Haskell category. So we need something different (AFAIK) to represent functors between other categories:
{-# LANGUAGE FunctionalDependencies, KindSignatures #-}
import Control.Arrow
import Control.Category hiding ((.))
import qualified Control.Category as C
import Control.Monad
class (Category c, Category d) => CFunctor f c d | f -> c d where
  cfmap :: c a b -> d (f a) (f b)
Notice that if we take (->) for both c and d, we get an endofunctor on the Haskell category, and cfmap gets just the type of fmap:
cfmap :: (a -> b) -> (f a -> f b)
Now we have an explicit type class that represents functors between two given categories c and d, and we can express the two adjoint functors for a given monad. The left one maps an object a to just a and maps a morphism f to (return .) f:
-- m is phantom, hence the explicit kind is required
newtype LeftAdj (m :: * -> *) a = LeftAdj { unLeftAdj :: a }
instance Monad m => CFunctor (LeftAdj m) (->) (Kleisli m) where
  cfmap f = Kleisli $ liftM LeftAdj . return . f . unLeftAdj
  -- we could also express it as liftM LeftAdj . (return .) f . unLeftAdj
The right one maps an object a to object m a and maps a morphism g to join . liftM g, or equivalently to (=<<) g:
newtype RightAdj m a = RightAdj { unRightAdj :: m a }
instance Monad m => CFunctor (RightAdj m) (Kleisli m) (->) where
  cfmap (Kleisli g) = RightAdj . join . liftM g . unRightAdj
  -- this can be shortened as RightAdj . (=<<) g . unRightAdj
(If anybody knows a better way to express this in Haskell, please let me know.)
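Composing the right adjoint after the left adjoint then recovers the original monad, up to the newtype wrappers; a sketch using the definitions above:
-- RightAdj m (LeftAdj m a) is just m a in disguise
toComposite :: Monad m => m a -> RightAdj m (LeftAdj m a)
toComposite = RightAdj . liftM LeftAdj

fromComposite :: Monad m => RightAdj m (LeftAdj m a) -> m a
fromComposite = liftM unLeftAdj . unRightAdj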
Maybe comes from the free functor into the category of pointed sets and the forgetful functor back.
[] comes from the free functor into the category of monoids and the forgetful functor back.
But neither of these categories is a subcategory of Hask.
As you observe, every pair of adjoint functors gives rise to a monad. The converse holds too: every monad arises in that way. In fact, it does so in two canonical ways. One is the Kleisli construction Petr describes; the other is the Eilenberg-Moore construction. Indeed, Kleisli is the initial such way and E-M the terminal one, in a suitable category of pairs of adjoint functors. They were discovered independently in 1965. If you want the details, I highly recommend the Catsters videos.
Parse error in pattern: f . g
I am a beginner; where am I going wrong?
    (f . g) x = f (g x)

class Functor f where
    fmap :: (a -> b) -> f a -> f b

class Functor g where
    fmap :: (a -> b) -> f a -> f b

instance Functor F where
    fmap id = id
    fmap (f . g) = fmap f . fmap g
When you make an instance of Functor, you should prove the side conditions that
fmap id = id
and
fmap (f . g) = fmap f . fmap g
(Technically the latter comes for free given the types involved and the former law, but it is still a good exercise.)
You can't do this just by saying
fmap id = id
but instead you use this as a reasoning tool -- once you have proven it.
That said, the code that you have written doesn't make sense for a number of reasons.
    (f . g) x = f (g x)
Since this is indented, I'm somewhat unclear if this is intended to be a definition for (.), but that is already included in the Prelude, so you need not define it again.
class Functor f where
    fmap :: (a -> b) -> f a -> f b
This definition is also provided for you in the Prelude.
class Functor g where
    fmap :: (a -> b) -> f a -> f b
But then you define the class again, and here the signature of fmap is mangled; it would have to be
fmap :: (a -> b) -> g a -> g b
But as you have another definition of Functor right above (and the Prelude has still another), you couldn't get this to compile anyway.
Finally, your
instance Functor F where
    fmap id = id
    fmap (f . g) = fmap f . fmap g
makes up a name F for a type that you want to make into an instance of Functor, and then tries to give the laws as an implementation, which isn't how it works.
Let us take an example of how it should work.
Consider a very simple functor:
data Pair a = Pair a a
instance Functor Pair where
    fmap f (Pair a b) = Pair (f a) (f b)
now, to prove fmap id = id, let us consider what fmap id and id do pointwise:
fmap id (Pair a b) = -- by definition
Pair (id a) (id b) = -- by beta reduction
Pair a (id b) = -- by beta reduction
Pair a b
id (Pair a b) = -- by definition
Pair a b
So, fmap id = id in this particular case.
Then you can check (though technically given the above, you don't have to) that fmap f . fmap g = fmap (f . g)
(fmap f . fmap g) (Pair a b) = -- definition of (.)
fmap f (fmap g (Pair a b)) = -- definition of fmap
fmap f (Pair (g a) (g b)) = -- definition of fmap
Pair (f (g a)) (f (g b))
fmap (f . g) (Pair a b) = -- definition of fmap
Pair ((f . g) a) ((f . g) b) = -- definition of (.)
Pair (f (g a)) ((f . g) b) = -- definition of (.)
Pair (f (g a)) (f (g b))
so fmap f . fmap g = fmap (f . g)
Now, you can make function composition into a functor.
class Functor f where
    fmap :: (a -> b) -> f a -> f b
We do this by partially applying the function arrow constructor.
Note that a -> b and (->) a b mean the same thing, so when we say
instance Functor ((->) e) where
the signature of fmap specializes to
fmap {- for (->) e -} :: (a -> b) -> (->) e a -> (->) e b
which, writing the arrows infix, looks like
fmap {- for (->) e -} :: (a -> b) -> (e -> a) -> e -> b
but this is just the signature for function composition!
So
instance Functor ((->) e) where
    fmap f g x = f (g x)
is a perfectly reasonable definition, or even
instance Functor ((->) e) where
    fmap = (.)
and it actually shows up in Control.Monad.Instances.
So all you need to use it is
import Control.Monad.Instances
and you don't need to write any code to support this at all; you can use fmap as function composition as a special case. For instance:
fmap (+1) (*2) 3 =
((+1) . (*2)) 3 =
((+1) ((*2) 3)) =
((+1) (3 * 2)) =
3 * 2 + 1 =
7
Since (.) is not a data constructor, you cannot use it for pattern matching, I believe. As far as I can tell there isn't an easy way to do what you're trying, although I'm pretty new to Haskell as well.
let is not used for top-level bindings; just write:
f . g = \x -> f (g x)
But the complaint, as cobbal said, is about fmap (f . g), which isn't valid. Actually, that whole class Functor F where is screwy. The class is already declared; now I think you want to make an instance:
instance Functor F where
    fmap f (SomeConstructorForF ...) = ...
    fmap f (OtherConstructorForF ...) = ...
etc.