What is the general term for a functor with a structure resembling QuickCheck's promote function, i.e., a function of the form:
promote :: (a -> f b) -> f (a -> b)
(this is the inverse of flip $ fmap (flip ($)) :: f (a -> b) -> (a -> f b)). Are there even any functors with such an operation, other than (->) r and Id? (I'm sure there must be). Googling 'quickcheck promote' only turned up the QuickCheck documentation, which doesn't give promote in any more general context AFAICS; searching SO for 'quickcheck promote' produces no results.
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
Given that Monad is a more powerful interface than Applicative, this tells us that a -> f b can do more things than f (a -> b). It follows that a function of type (a -> f b) -> f (a -> b) can't be injective. The domain is bigger than the codomain, in a handwavy manner. This means there's no way you can possibly preserve the behavior of the function; it just doesn't work out across generic functors.
You can, of course, characterize functors in which that operation is injective. Identity and (->) a are certainly examples. I'm willing to bet there are more examples, but nothing jumps out at me immediately.
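For reference, here is what the operation looks like for those two examples (a minimal sketch; the standalone names promoteIdentity and promoteReader are mine):
-- A minimal sketch of promote for Identity and for the reader functor ((->) r).
import Data.Functor.Identity (Identity (..))

promoteIdentity :: (a -> Identity b) -> Identity (a -> b)
promoteIdentity f = Identity (runIdentity . f)

promoteReader :: (a -> (r -> b)) -> (r -> (a -> b))
promoteReader f r a = f a r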
So far I found these ways of constructing an f with the promote morphism:
f = Identity
if f and g both have promote then the pair functor h t = (f t, g t) also does
if f and g both have promote then the composition h t = f (g t) also does
if f has the promote property and g is any contrafunctor then the functor h t = g t -> f t has the promote property
The last property can be generalized to profunctors g, but then f will be merely a profunctor, so it's probably not very useful, unless you only require profunctors.
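To make the pair and composition constructions concrete, here is a sketch using Product and Compose from base (the Promote synonym and helper names are mine; the contravariant-exponent construction can be written in the same style):
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Product (Product (..))
import Data.Functor.Compose (Compose (..))

-- The shape of QuickCheck's promote, abstracted over the functor.
type Promote f = forall a b. (a -> f b) -> f (a -> b)

-- If f and g both have promote, so does their product.
promotePair :: Promote f -> Promote g -> Promote (Product f g)
promotePair pf pg h =
  Pair (pf (\a -> case h a of Pair x _ -> x))
       (pg (\a -> case h a of Pair _ y -> y))

-- If f and g both have promote, so does their composition.
promoteCompose :: Functor f => Promote f -> Promote g -> Promote (Compose f g)
promoteCompose pf pg h = Compose (fmap pg (pf (getCompose . h)))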
Now, using these four constructions, we can find many examples of functors f for which promote exists:
f t = (t,t)
f t = (t, b -> t)
f t = (t -> a) -> t
f t = ((t,t) -> b) -> (t,t,t)
f t = ((t, t, c -> t, (t -> b) -> t) -> a) -> t
Also note that the promote property implies that f is pointed.
point :: t -> f t
point x = fmap (const x) (promote id)
Essentially the same question: Is this property of a functor stronger than a monad?
Data.Distributive has
class Functor g => Distributive g where
distribute :: Functor f => f (g a) -> g (f a)
-- other non-critical methods
Renaming your variables, you get
promote :: (c -> g a) -> g (c -> a)
Using slightly invalid syntax for clarity,
promote :: ((c ->) (g a)) -> g ((c ->) a)
(c ->) is a Functor, so the type of promote is a special case of the type of distribute. Thus every Distributive functor supports your promote. I don't know if any support promote but not Distributive.
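Concretely (a minimal sketch), promote is distribute with the outer Functor specialized to the reader functor:
-- promote as a special case of distribute, with f ~ ((->) c).
import Data.Distributive (Distributive, distribute)

promote :: Distributive g => (c -> g a) -> g (c -> a)
promote = distribute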
Related
I have a Bitraversable called t that supports this operation:
someName :: Monad m => (t (m a) (m b) -> c) -> m (t a b) -> c
In other words, it's possible to take a function that accepts two monads packaged into the bitraversable and turn it into a mapping that accepts a single monad containing a bitraversable without the monad layer. This is something like a bitraversable and higher-level version of distribute; the type signature is similar to this:
\f -> \x -> f (distribute x)
:: (Distributive g, Functor f) => (g (f a) -> c) -> f (g a) -> c
My questions:
Is there a standard name for this "higher-level" version of distribute that works on functions that accept distributives rather than distributives themselves?
Is there a name for the bitraversable version?
Does it work with every bitraversable/functor/monad/whatever, or are there restrictions?
As per @Noughtmare's comment, your "higher level" versions of someName and distribute are just written in continuation-passing style. These generally aren't worth additional names, because they are just right-compositions:
highLevelDistribute = (. distribute)
Practically speaking, anywhere you want to call highLevelDistribute on an argument:
highLevelDistribute f
this expression is equivalent to:
f . distribute
and even if you're using highLevelDistribute as a first-class value, it's just not that hard to write and understand the section (. distribute).
Note that traverse and sequenceA are a little different, since we have:
sequenceA = traverse id
You could make an argument that this difference doesn't really warrant separate names either, but that's an argument for another day.
Getting back to someName, it's a CPS version of:
someOtherName :: m (t a b) -> t (m a) (m b)
which looks like a bifunctor analogue of distribute:
distribute :: (Distributive g, Functor f) => f (g a) -> g (f a)
So, I'd suggest inventing a Bidistributive to reflect this, and someOtherName becomes bidistribute:
class Bifunctor g => Bidistributive g where
{-# MINIMAL bidistribute | bicollect #-}
bidistribute :: Functor f => f (g a b) -> g (f a) (f b)
bidistribute = bicollect id
bicollect :: Functor f => (a -> g b c) -> f a -> g (f b) (f c)
bicollect f = bidistribute . fmap f
Again, your "higher level" someName is just right-composition:
someName = (. bidistribute)
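For a concrete instance of this hypothetical class (the instance is my sketch, not part of the answer), the pair bifunctor works, since a functorial layer over a pair can be split with two fmaps:
-- Assuming the Bidistributive class above is in scope.
instance Bidistributive (,) where
  bidistribute fab = (fmap fst fab, fmap snd fab)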
Reasonable laws for a Bidistributive would probably include the following. I'm not sure if these are sufficiently general and/or exhaustive:
-- naturality
bimap (fmap f) (fmap g) . bidistribute = bidistribute . fmap (bimap f g)
-- identity
bidistribute . Identity = bimap Identity Identity
-- composition
bimap Compose Compose . bidistribute . fmap bidistribute = bidistribute . Compose
For your question #3, not all Bitraversables are Bidistributive, for much the same reason that not all Traversables are Distributive. A Distributive allows you to "expose structure" under an arbitrary functor. So, for example, there's no Distributive instance for lists, because if there was, you could call:
distribute :: IO [a] -> [IO a]
which would allow you to determine if a list returned by an IO action was empty or not, without executing the IO action.
Similarly, Either is Bitraversable, but it can't be Bidistributive, because if it was, you'd be able to use:
bidistribute :: IO (Either a b) -> Either (IO a) (IO b)
to determine if the IO action returned a Left or Right without having to execute the IO action.
One interesting thing about bidistribute is that the "other functor" can be any Functor; it doesn't need to be an Applicative. So, just as we have:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
distribute :: (Distributive g, Functor f) => f (g a) -> g (f a)
we have:
bisequence :: (Bitraversable t, Applicative f) => t (f a) (f b) -> f (t a b)
bidistribute :: (Bidistributive g, Functor f) => f (g a b) -> g (f a) (f b)
Intuitively, sequencing needs the power of an applicative functor f to be able to "build" the f (t a) from a traversal of its functorial f a "parts", while distribution only needs to take the f (g a) apart. In practical terms, this means that sequencing typically looks like this:
-- specialized to t ~ []
sequenceA :: [f a] -> f [a]
sequenceA []     = pure []                    -- need pure
sequenceA (f:fs) = (:) <$> f <*> sequenceA fs -- need applicative operations
while distribution typically looks like this:
-- specialized to g ~ (->) r
distribute :: f (r -> a) -> (r -> f a)
distribute f r = fmap ($ r) f -- only need fmap
(Technically, according to the documentation for Data.Distributive, the Distributive class only requires a Functor rather than some coapplicative class because of the lack of non-trivial comonoids in Haskell. See this SO answer.)
I am reading the "Free Applicative Functors" paper. Admittedly, the question I am going to ask is somewhat of an aside with respect to the main idea of the paper, but still...
...on page 6 there is an attempt to generalize Functor to MultiFunctor:
class Functor f ⇒ MultiFunctor f where
fmap0 :: a → f a
fmap1 :: (a → b) → f a → f b
fmap1 = fmap
fmap2 :: (a → b → c) → f a → f b → f c
...
I cannot see how this definition is justified from the category-theory viewpoint: fmap2 seems to be just a bifunctor, i.e. a functor defined on a product category. By definition, a product category is given by all possible ordered pairs of objects, and its morphisms are pairs as well, hence fmap2 :: (a -> a', b -> b') -> (f a, f b) -> (f a', f b') looks and feels like the more appropriate signature.
I can understand the way of thinking behind the (a -> b -> c) -> f a -> f b -> f c choice: it is just the most obvious way to take the known (a -> b) -> f a -> f b signature and make it work with binary functions rather than unary ones. But is MultiFunctor (given by the definition above) actually a bi-/multifunctor in the sense that category theory expects it to be?
P.S. The reason why I am curious is that it seems like one can't get to the Applicative by generalizing Functor, though paper states that one can.
I think the category theory angle you are taking is wrong. There is a Bifunctor class (with a map of type (a -> b) -> (c -> d) -> f a c -> f b d) but that is not what this generalisation is. If one uncurries some functions then the signature of fmap2 looks like:
fmap2 :: ((a,b) -> c) -> (f a, f b) -> f c
And by considering fmap2 id, we see that what we are implementing is not a bifunctor but a cartesian functor (i.e. a monoidal functor between cartesian categories), with fmap2 id :: (f a, f b) -> f (a, b) being the monoidal structure map, i.e. the natural transformation f a × f b -> f (a × b).
One can then get an applicative from this Multifunctor generalisation. Just change pure for fmap0 and (<*>) for fmap2 ($).
Let's start with the obvious: fmap0 is pure.
Here's one you made a mistake on: fmap2 is liftA2.
(bimap is very different - (a -> b) -> (c -> d) -> f a b -> f c d)
And if you go back to the definition of Applicative, you see that it has a default implementation of (<*>), which is liftA2 id, which allows you to define it in terms of pure and either liftA2 or (<*>).
So yes, that class is equivalent to Applicative.
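To spell out the equivalence (a minimal sketch; the standalone names pureMF, apMF, fmap0A, fmap2A are mine, and the MultiFunctor class quoted from the paper is assumed to be in scope):
-- From MultiFunctor to Applicative: pure = fmap0, (<*>) = fmap2 ($).
pureMF :: MultiFunctor f => a -> f a
pureMF = fmap0

apMF :: MultiFunctor f => f (a -> b) -> f a -> f b
apMF = fmap2 ($)

-- From Applicative back to MultiFunctor: fmap0 = pure, fmap2 = liftA2.
fmap0A :: Applicative f => a -> f a
fmap0A = pure

fmap2A :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
fmap2A h x y = h <$> x <*> y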
A well-known alternative formulation of Applicative (see, e.g., Typeclassopedia) is
class Functor f => Monoidal f where
unit :: f ()
pair :: f a -> f b -> f (a, b)
This leads to laws that look more like typical identity and associativity laws than what you get from Applicative, but only when you work through pair-reassociating isomorphisms. Thinking about this a few weeks ago, I came up with two other formulations that avoid this problem.
class Functor f => Fapplicative f where
funit :: f (a -> a)
fcomp :: f (b -> c) -> f (a -> b) -> f (a -> c)
class Functor f => Capplicative f where
cunit :: Category (~>) => f (a ~> a)
ccomp :: Category (~>) => f (b ~> c) -> f (a ~> b) -> f (a ~> c)
It's easy to implement Capplicative using Applicative, Fapplicative using Capplicative, and Applicative using Fapplicative, so these all have equivalent power.
The identity and associativity laws are entirely obvious. But Monoidal needs a naturality law, and these must as well. How might I formulate them? Also: Capplicative seems to suggest an immediate generalization:
class (Category (~>), Functor f) => Appish (~>) f where
unit1 :: f (a ~> a)
comp1 :: f (b ~> c) -> f (a ~> b) -> f (a ~> c)
I am a bit curious about whether this (or something similar) is good for something.
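For reference, a sketch of the conversions claimed above (the helper names are mine, and funit/fcomp are passed explicitly rather than taken from the classes, so the snippet stands alone):
{-# LANGUAGE RankNTypes #-}
import Prelude hiding (id, (.))
import Control.Category (Category, id, (.))
import Control.Applicative (liftA2)

-- Capplicative from Applicative.
cunitFromApplicative :: (Applicative f, Category k) => f (k a a)
cunitFromApplicative = pure id

ccompFromApplicative :: (Applicative f, Category k)
                     => f (k b c) -> f (k a b) -> f (k a c)
ccompFromApplicative = liftA2 (.)

-- Fapplicative from Capplicative: take (~>) to be (->), so funit = cunit
-- and fcomp = ccomp at that instantiation.

-- Applicative from Fapplicative (funit and fcomp passed explicitly).
pureFromF :: Functor f => f (a -> a) -> b -> f b
pureFromF funit x = fmap (const x) funit

apFromF :: Functor f
        => (forall x y z. f (y -> z) -> f (x -> y) -> f (x -> z))
        -> f (a -> b) -> f a -> f b
apFromF fcomp u v = fmap ($ ()) (fcomp u (fmap const v))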
This is a really neat idea!
I think the free theorem for fcomp is
fcomp (fmap (post .) u) (fmap (. pre) v) = fmap (\f -> post . f . pre) (fcomp u v)
Here is how we can define KleisliFunctor:
class (Monad m, Functor f) => KleisliFunctor m f where
kmap :: (a -> m b) -> f a -> f b
kmap f = kjoin . fmap f
kjoin :: f (m a) -> f a
kjoin = kmap id
Does this type class
class (Functor f, Monad m) => Absorb f m where
(>>~) :: f a -> (a -> m b) -> m b
a >>~ f = ajoin $ fmap f a
ajoin :: f (m a) -> m a
ajoin a = a >>~ id
fit somewhere into category theory? What are the laws? Are they
a >>~ g . f === fmap f a >>~ g
a >>~ (f >=> g) === a >>~ f >>= g
?
This is a speculative answer. Proceed with caution.
Let's first consider KleisliFunctor, focusing on the bind-like arrow mapping:
class (Monad m, Functor f) => KleisliFunctor m f where
kmap :: (a -> m b) -> f a -> f b
For this to actually be a functor from the Kleisli category of m to Hask, kmap has to follow the relevant functor laws:
-- Mapping the identity gives identity (in the other category).
kmap return = id
-- Mapping a composed arrow gives a composed arrow (in the other category).
kmap (g <=< f) = kmap g . kmap f
The fact that there are two Functors involved makes things a little unusual, but not unreasonable -- for instance, the laws do hold for mapMaybe, which is the first concrete example the KleisliFunctor post alludes to.
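Concretely, the instance that example suggests would be (my sketch, assuming the KleisliFunctor class above):
-- Lists as a functor from the Kleisli category of Maybe to Hask.
import Data.Maybe (mapMaybe)

instance KleisliFunctor Maybe [] where
  kmap = mapMaybe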
As for Absorb, I will flip the bind-like method for the sake of clarity:
class (Functor f, Monad m) => Absorb f m where
(~<<) :: (a -> m b) -> f a -> m b
If we are looking for something analogous to KleisliFunctor, a question that immediately arises is which category would have functions of type f a -> m b as arrows. It certainly cannot be Hask, as its identity (of type f a -> m a) cannot be id. We would have to figure out not only identity but also composition. For something that is not entirely unlike Monad...
idAbsorb :: f a -> m a
compAbsorb :: (f b -> m c) -> (f a -> m b) -> (f a -> m c)
... the only plausible thing I can think of right now is having a monad morphism as idAbsorb and using a second monad morphism in the opposite direction (that is, from m to f) so that compAbsorb can be implemented by applying the first function, then going back to f and finally applying the second function. We would need to work that out in order to see if my assumptions are appropriate, if this approach works, and if it leads to something useful for your purposes.
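For what it's worth, here is one trivial family of instances (my sketch, assuming the Absorb class from the question) that does satisfy the proposed laws, with idAbsorb = return . runIdentity playing the role of the monad morphism:
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Data.Functor.Identity (Identity (..))

-- Any monad absorbs Identity.
instance Monad m => Absorb Identity m where
  Identity a >>~ f = f a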
When using Data.Traversable I frequently require some code like
import Control.Applicative (Applicative,(<*>),pure)
import Data.Traversable (Traversable,traverse,sequenceA)
import Control.Monad.State (state,runState)
traverseF :: Traversable t => ((a,s) -> (b,s)) -> (t a, s) -> (t b, s)
traverseF f (t,s) = runState (traverse (state.curry f) t) s
to traverse the structure and build up a new one, driven by some state. I noticed the pattern in the type signature and believe it could be generalized as
fmapInner :: (Applicative f,Traversable t) => (f a -> f b) -> f (t a) -> f (t b)
fmapInner f t = ???
But I fail to implement this with just traverse, sequenceA, fmap, <*> and pure. Maybe I need a stronger type class constraint? Do I absolutely need a Monad here?
UPDATE
Specifically, I want to know: if I can define fmapInner for an f that works for any Traversable t, with some intuitive laws satisfied (I don't know yet what the laws should be), does that imply that f is a Monad? I ask because, for Monads, the implementation is trivial:
--Monad m implies Applicative m but we still
-- have to say it unless we use mapM instead
fmapInner :: (Monad m,Traversable t) => (m a -> m b) -> m (t a) -> m (t b)
fmapInner f t = t >>= Data.Traversable.mapM (\a -> f (return a))
UPDATE
Thanks for the excellent answer. I have found that my traverseF is just
import Data.Traversable (mapAccumL)
traverseF1 :: Traversable t => ((a, b) -> (a, c)) -> (a, t b) -> (a, t c)
traverseF1 = uncurry . mapAccumL . curry
without using Control.Monad.State explicitly and with all pairs flipped. Previously I thought it was mapAccumR, but it is actually mapAccumL that works like traverseF.
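As a quick sanity check of traverseF1 (my example, not from the original post): numbering the characters of a string while threading a counter from left to right:
example :: (Int, [(Int, Char)])
example = traverseF1 (\(n, x) -> (n + 1, (n, x))) (0, "abc")
-- => (3, [(0,'a'), (1,'b'), (2,'c')])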
I've now convinced myself that this is impossible. Here's why,
tF :: (Applicative f, Traversable t) => (f a -> f b) -> f (t a) -> f (t b)
So we have this side-effecting computation that returns t a and we want to use this to determine what side effects happen. In other words, the value of type t a will determine what side effects happen when we apply traverse.
However, this isn't possible with the Applicative type class. We can dynamically choose values, but the side effects of our computations are static. To see what I mean,
pure :: a -> f a -- No side effects
(<*>) :: f (a -> b) -> f a -> f b -- The side effects of `f a` can't
                                  -- depend on the values inside `f (a -> b)`.
Now there are two conceivable ways for side effects to depend on previously computed values. Either we have
smash :: f (f a) -> f a
Because then we can simply do
smash $ (f :: a -> f a) <$> (fa :: f a) :: f a
Now your function becomes
traverseF f t = smash $ traverse (f . pure) <$> t
Or we can have
bind :: m a -> (a -> m b) -> m b -- and it's obvious how `a -> m b`
-- can choose side effects.
and
traverseF f t = bind t (traverse $ f . pure)
But these are join and >>= respectively and are members of the Monad typeclass. So yes, you need a monad. :(
Also, a nice, pointfree implementation of your function with monad constraints is
traverseM = (=<<) . mapM . (.return)
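As a small illustration of why the Monad constraint matters (my example): with m ~ Maybe, the effect can depend on the values inside the structure, which is exactly what Applicative alone cannot express:
-- The inner function can veto the whole traversal based on the value it sees.
checkSmall :: Maybe [Int]
checkSmall = traverseM (\mx -> mx >>= \x -> if x > 2 then Nothing else Just x)
                       (Just [1, 2, 3])
-- => Nothing

checkAll :: Maybe [Int]
checkAll = traverseM (\mx -> mx >>= \x -> if x > 5 then Nothing else Just x)
                     (Just [1, 2, 3])
-- => Just [1,2,3]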
Edit,
I suppose it's worth noting that
traverseF :: (Applicative f, Traversable t) => (f a -> f b) -> t a -> f (t b)
traverseF = traverse . (.pure)