What is this thing similar to KleisliFunctor?

Here is how we can define KleisliFunctor:
class (Monad m, Functor f) => KleisliFunctor m f where
  kmap :: (a -> m b) -> f a -> f b
  kmap f = kjoin . fmap f

  kjoin :: f (m a) -> f a
  kjoin = kmap id
Does this type class
class (Functor f, Monad m) => Absorb f m where
  (>>~) :: f a -> (a -> m b) -> m b
  a >>~ f = ajoin $ fmap f a

  ajoin :: f (m a) -> m a
  ajoin a = a >>~ id
fit somewhere into category theory? What are the laws? Are they
a >>~ g . f === fmap f a >>~ g
a >>~ (f >=> g) === a >>~ f >>= g
?

This is a speculative answer. Proceed with caution.
Let's first consider KleisliFunctor, focusing on the bind-like arrow mapping:
class (Monad m, Functor f) => KleisliFunctor m f where
  kmap :: (a -> m b) -> f a -> f b
For this to actually be a functor from the Kleisli category of m to Hask, kmap has to follow the relevant functor laws:
-- Mapping the identity gives identity (in the other category).
kmap return = id
-- Mapping a composed arrow gives a composed arrow (in the other category).
kmap (g <=< f) = kmap g . kmap f
The fact that there are two Functors involved makes things a little unusual, but not unreasonable -- for instance, the laws do hold for mapMaybe, which is the first concrete example the KleisliFunctor post alludes to.
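A minimal sketch of that mapMaybe instance, assuming the class exactly as defined above:
import Data.Maybe (mapMaybe)

-- Lists form a Kleisli functor over Maybe: mapping a partial function
-- keeps the Just results and drops the Nothings.
instance KleisliFunctor Maybe [] where
  kmap = mapMaybe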
As for Absorb, I will flip the bind-like method for the sake of clarity:
class (Functor f, Monad m) => Absorb f m where
  (~<<) :: (a -> m b) -> f a -> m b
If we are looking for something analogous to KleisliFunctor, a question that immediately arises is which category would have functions of type f a -> m b as arrows. It certainly cannot be Hask, as its identity (of type f a -> m a) cannot be id. We would have to figure out not only identity but also composition. For something that is not entirely unlike Monad...
idAbsorb :: f a -> m a
compAbsorb :: (f b -> m c) -> (f a -> m b) -> (f a -> m c)
... the only plausible thing I can think of right now is having a monad morphism as idAbsorb, and using a second monad morphism in the opposite direction (that is, from m to f) so that compAbsorb can be implemented by applying the first function, going back to f, and finally applying the second function. We would need to work that out in order to see whether my assumptions are appropriate, whether this approach works, and whether it leads to something useful for your purposes.
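A rough sketch of that idea, passing the two hypothetical morphisms explicitly (fToM and mToF are placeholder names of mine, not part of either class):
{-# LANGUAGE RankNTypes #-}

-- idAbsorb would be the morphism from f to m; compAbsorb goes through m,
-- back to f, and then through the second arrow.  Purely illustrative.
idAbsorbWith :: (forall x. f x -> m x) -> f a -> m a
idAbsorbWith fToM = fToM

compAbsorbWith :: (forall x. m x -> f x)
               -> (f b -> m c) -> (f a -> m b) -> (f a -> m c)
compAbsorbWith mToF g f = g . mToF . f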

Related

Is there a name for this higher-level "bi" version of distribute in Haskell?

I have a Bitraversable called t that supports this operation:
someName :: Monad m => (t (m a) (m b) -> c) -> m (t a b) -> c
In other words, it's possible to take a function that accepts two monads packaged into the bitraversable and turn it into a mapping that accepts a single monad containing a bitraversable without the monad layer. This is something like a bitraversable and higher-level version of distribute; the type signature is similar to this:
\f -> \x -> f (distribute x)
:: (Distributive g, Functor f) => (g (f a) -> c) -> f (g a) -> c
My questions:

1. Is there a standard name for this "higher-level" version of distribute that works on functions that accept distributives rather than distributives themselves?
2. Is there a name for the bitraversable version?
3. Does it work with every bitraversable/functor/monad/whatever, or are there restrictions?
As per @Noughtmare's comment, your "higher level" functions someName and distribute are just written in continuation passing style. These generally aren't worth additional names, because they are just right function compositions:
highLevelDistribute = (. distribute)
Practically speaking, anywhere you want to call highLevelDistribute on an argument:
highLevelDistribute f
this expression is equivalent to:
f . distribute
and even if you're using highLevelDistribute as a first-class value, it's just not that hard to write and understand the section (. distribute).
Note that traverse and sequenceA are a little different, since we have:
sequenceA = traverse id
You could make an argument that this difference doesn't really warrant separate names either, but that's an argument for another day.
Getting back to someName, it's a CPS version of:
someOtherName :: m (t a b) -> t (m a) (m b)
which looks like a bifunctor analogue of distribute:
distribute :: (Distributive g, Functor f) => f (g a) -> g (f a)
So, I'd suggest inventing a Bidistributive to reflect this, and someOtherName becomes bidistribute:
class Bifunctor g => Bidistributive g where
  {-# MINIMAL bidistribute | bicollect #-}
  bidistribute :: Functor f => f (g a b) -> g (f a) (f b)
  bidistribute = bicollect id

  bicollect :: Functor f => (a -> g b c) -> f a -> g (f b) (f c)
  bicollect f = bidistribute . fmap f
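As a quick sanity check, here is a minimal sketch of an instance of that class (the instance is my illustration, not from the answer): the pair bifunctor distributes over any Functor by projecting each component.
import Data.Bifunctor (Bifunctor)

instance Bidistributive (,) where
  -- Project each component under the functor.
  bidistribute fab = (fmap fst fab, fmap snd fab)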
Again, your "higher level" someName is just right-composition:
someName = (. bidistribute)
Reasonable laws for a Bidistributive would probably include the following. I'm not sure if these are sufficiently general and/or exhaustive:
-- naturality
bimap (fmap f) (fmap g) . bidistribute = bidistribute . fmap (bimap f g)
-- identity
bidistribute . Identity = bimap Identity Identity
-- composition
bimap Compose Compose . bidistribute . fmap bidistribute = bidistribute . Compose
For your question #3, not all Bitraversables are Bidistributive, for much the same reason that not all Traversables are Distributive. A Distributive allows you to "expose structure" under an arbitrary functor. So, for example, there's no Distributive instance for lists, because if there was, you could call:
distribute :: IO [a] -> [IO a]
which would allow you to determine if a list returned by an IO action was empty or not, without executing the IO action.
Similarly, Either is Bitraversable, but it can't be Bidistributive, because if it was, you'd be able to use:
bidistribute :: IO (Either a b) -> Either (IO a) (IO b)
to determine if the IO action returned a Left or Right without having to execute the IO action.
One interesting thing about bidistribute is that the "other functor" can be any Functor; it doesn't need to be an Applicative. So, just as we have:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
distribute :: (Distributive g, Functor f) => f (g a) -> g (f a)
we have:
bisequence :: (Bitraversable t, Applicative f) => t (f a) (f b) -> f (t a b)
bidistribute :: (Bidistributive g, Functor f) => f (g a b) -> g (f a) (f b)
Intuitively, sequencing needs the power of an applicative functor f to be able to "build" the f (t a) from a traversal of its functorial f a "parts", while distribution only needs to take the f (g a) apart. In practical terms, this means that sequencing typically looks like this:
-- specialized to t ~ []
sequenceA :: Applicative f => [f a] -> f [a]
sequenceA []     = pure []
sequenceA (f:fs) = (:) <$> f <*> sequenceA fs -- need applicative operations
while distribution typically looks like this:
-- specialized to g ~ (->) r
distribute :: f (r -> a) -> (r -> f a)
distribute f r = fmap ($ r) f -- only need fmap
(Technically, according to the documentation for Data.Distributive, the Distributive class only requires a Functor rather than some coapplicative class because of the lack of non-trivial comonoids in Haskell. See this SO answer.)

How to Factorize Continuation Monad into Left & Right Adjoints?

The State monad can be factorized into Product (the left adjoint, a Functor) and Reader (the right adjoint, a Representable functor). Is there a way to factorize the Continuation monad similarly? The code below is my attempt, which won't type check:
-- To form a -> (a -> k) -> k
{-# LANGUAGE MultiParamTypeClasses, TypeOperators, InstanceSigs, TypeSynonymInstances #-}

type (<-:) o i = i -> o
-- I don't think we can have Functor & Representable for this type synonym

class Isomorphism a b where
  from :: a -> b
  to :: b -> a

instance Adjunction ((<-:) e) ((<-:) e) where
  unit :: a -> (a -> e) -> e
  unit a handler = handler a

  counit :: (a -> e) -> e -> a
  counit f e = undefined -- If we had a constraint Isomorphism a e, we could implement this
Is there a list of left and right adjoints that form monads?
I have read that a pair of adjoint functors gives rise to a unique Monad and Comonad, but that a given Monad can be factorized into multiple such pairs. Is there any example of this?
This doesn't typecheck because the class Adjunction only represents a small subset of adjunctions, where both functors are endofunctors on Hask.
As it turns out, this is not the case for the adjunction (<-:) r -| (<-:) r. There are two subtly different functors here:
f = (<-:) r, the functor from Hask to Op(Hask) (the opposite category of Hask, sometimes also denoted Hask^op)
g = (<-:) r, the functor from Op(Hask) to Hask
In particular, the counit should be a natural transformation in the Op(Hask) category, which flips arrows around:
unit :: a -> g (f a)
counit :: f (g a) <-: a
In fact, counit coincides with unit in this adjunction.
To capture this properly, we need to generalize the Functor and Adjunction classes so we can model adjunctions between different categories:
class Exofunctor c d f where
exomap :: c a b -> d (f a) (f b)
class
(Exofunctor d c f, Exofunctor c d g) =>
Adjunction
(c :: k -> k -> Type)
(d :: h -> h -> Type)
(f :: h -> k)
(g :: k -> h) where
unit :: d a (g (f a))
counit :: c (f (g a)) a
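For intuition, an ordinary Functor is then an Exofunctor from Hask to Hask; a minimal sketch (my illustration, requiring MultiParamTypeClasses and FlexibleInstances):
-- Every ordinary Functor is an Exofunctor between (->) and (->).
instance Functor f => Exofunctor (->) (->) f where
  exomap = fmap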
Then we get again that Compose is a monad (and a comonad if we flip the adjunction):
newtype Compose f g a = Compose { unCompose :: f (g a) }

adjReturn :: forall c f g a. Adjunction c (->) f g => a -> Compose g f a
adjReturn = Compose . unit @_ @_ @c @(->)

adjJoin :: forall c f g a. Adjunction c (->) f g => Compose g f (Compose g f a) -> Compose g f a
adjJoin = Compose . exomap (counit @_ @_ @c @(->)) . (exomap . exomap @(->) @c) unCompose . unCompose
and Cont is merely a special case of that:
type Cont r = Compose ((<-:) r) ((<-:) r)
See also this gist for more details: https://gist.github.com/Lysxia/beb6f9df9777bbf56fe5b42de04e6c64
I have read that a pair of adjoint functors gives rise to a unique Monad and Comonad, but that a given Monad can be factorized into multiple such pairs. Is there any example of this?
The factorization is generally not unique. Once you've generalized adjunctions as above, then you can at least factor any monad M as an adjunction between its Kleisli category and its base category (in this case, Hask).
Every monad M defines an adjunction
F -| G
where
F : (->) -> Kleisli M
  : Type -> Type                -- Types are the objects of both categories, (->) and Kleisli M.
                                -- The left adjoint F maps each object to itself.
  : (a -> b) -> (a -> M b)      -- The morphism mapping uses return.

G : Kleisli M -> (->)
  : Type -> Type                -- The right adjoint G maps each object a to M a.
  : (a -> M b) -> (M a -> M b)  -- This is (=<<).
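In plain Haskell terms, the morphism mappings of F and G look roughly like this (a sketch with names of my own, using a local Kleisli newtype rather than the one in Control.Arrow):
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

-- Left adjoint F on morphisms: embed a pure function as a Kleisli arrow via return.
leftAdjointMap :: Monad m => (a -> b) -> Kleisli m a b
leftAdjointMap f = Kleisli (return . f)

-- Right adjoint G on morphisms: turn a Kleisli arrow into a function on m, i.e. (=<<).
rightAdjointMap :: Monad m => Kleisli m a b -> m a -> m b
rightAdjointMap (Kleisli k) = (k =<<)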
I don't know whether the continuation monad corresponds to an adjunction between endofunctors on Hask.
See also the nCatLab article on monads: https://ncatlab.org/nlab/show/monad#RelationToAdjunctionsAndMonadicity
Relation to adjunctions and monadicity
Every adjunction (L ⊣ R) induces a monad R∘L and a comonad L∘R. There is in general more than one adjunction which gives rise to a given monad this way, in fact there is a category of adjunctions for a given monad. The initial object in that category is the adjunction over the Kleisli category of the monad and the terminal object is that over the Eilenberg-Moore category of algebras. (e.g. Borceux, vol. 2, prop. 4.2.2) The latter is called the monadic adjunction.

Applicative laws for alternative class formulations

A well-known alternative formulation of Applicative (see, e.g., Typeclassopedia) is
class Functor f => Monoidal f where
  unit :: f ()
  pair :: f a -> f b -> f (a, b)
This leads to laws that look more like typical identity and associativity laws than what you get from Applicative, but only when you work through pair-reassociating isomorphisms. Thinking about this a few weeks ago, I came up with two other formulations that avoid this problem.
class Functor f => Fapplicative f where
  funit :: f (a -> a)
  fcomp :: f (b -> c) -> f (a -> b) -> f (a -> c)

class Functor f => Capplicative f where
  cunit :: Category (~>) => f (a ~> a)
  ccomp :: Category (~>) => f (b ~> c) -> f (a ~> b) -> f (a ~> c)
It's easy to implement Capplicative using Applicative, Fapplicative using Capplicative, and Applicative using Fapplicative, so these all have equivalent power.
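For instance, the first of those three implementations amounts to a couple of one-liners (standalone names of my own, not class methods):
import Prelude hiding (id, (.))
import Control.Applicative (liftA2)
import Control.Category (Category (..))

-- cunit and ccomp expressed via an underlying Applicative.
cunitViaApplicative :: (Applicative f, Category cat) => f (cat a a)
cunitViaApplicative = pure id

ccompViaApplicative :: (Applicative f, Category cat)
                    => f (cat b c) -> f (cat a b) -> f (cat a c)
ccompViaApplicative = liftA2 (.)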
The identity and associativity laws are entirely obvious. But Monoidal needs a naturality law, and these must as well. How might I formulate them? Also: Capplicative seems to suggest an immediate generalization:
class (Category (~>), Functor f) => Appish (~>) f where
  unit1 :: f (a ~> a)
  comp1 :: f (b ~> c) -> f (a ~> b) -> f (a ~> c)
I am a bit curious about whether this (or something similar) is good for something.
This is a really neat idea!
I think the free theorem for fcomp is
fcomp (fmap (post .) u) (fmap (. pre) v) = fmap (\f -> post . f . pre) (fcomp u v)

Why is there no `Cofunctor` typeclass in Haskell?

Monads get fmap from the Functor typeclass. Why don't comonads need a cofmap method defined in a Cofunctor class?
Functor is defined as:
class Functor f where
  fmap :: (a -> b) -> (f a -> f b)
Cofunctor could be defined as follows:
class Cofunctor f where
  cofmap :: (b -> a) -> (f b -> f a)
So, both are technically the same, and that's why Cofunctor does not exist. "The dual concept of 'functor in general' is still 'functor in general'".
Since Functor and Cofunctor are the same, both monads and comonads are defined by using Functor. But don't let that make you think that monads and comonads are the same thing, they're not.
A monad is defined (simplifying) as:
class Functor m => Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
whereas a comonad (again, simplified) is:
class Functor w => Comonad w where
  extract :: w a -> a
  extend :: (w a -> b) -> w a -> w b
Note the "symmetry".
Another thing is a contravariant functor, defined as:
import Data.Functor.Contravariant
class Contravariant f where
contramap :: (b -> a) -> (f a -> f b)
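A standard concrete example is Predicate from Data.Functor.Contravariant:
newtype Predicate a = Predicate { getPredicate :: a -> Bool }

instance Contravariant Predicate where
  -- Precompose the predicate with the mapped function.
  contramap f (Predicate p) = Predicate (p . f)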
For reference,
class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  extend :: (w a -> b) -> w a -> w b

class Applicative m => Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b

join :: Monad m => m (m a) -> m a
Note that given extract and extend you can produce fmap and duplicate, and that given return and >>= you can produce fmap, pure, <*>, and join. So we can focus on just pure+>>= and extract+extend.
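Here is a quick sketch of a few of those derivations, written as standalone functions so they don't clash with the class methods (assuming the Comonad class from the comonad package):
import Control.Comonad (Comonad (extract, extend))

-- fmap and duplicate from extract and extend.
fmapW :: Comonad w => (a -> b) -> w a -> w b
fmapW f = extend (f . extract)

duplicateW :: Comonad w => w a -> w (w a)
duplicateW = extend id

-- fmap and join from return and (>>=).
fmapM :: Monad m => (a -> b) -> m a -> m b
fmapM f m = m >>= (return . f)

joinM :: Monad m => m (m a) -> m a
joinM m = m >>= id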
I imagine you might be looking for something like
class InverseFunctor f where
  unmap :: (f a -> f b) -> a -> b
Since the Monad class makes it easy to "put things in" while only allowing a sort of hypothetical approach to "taking things out", and Comonad does something opposed to that, your request initially sounds sensible. However, there is a significant asymmetry between >>= and extend that will get in the way of any attempt to define unmap. Note in particular that the first argument of >>= has type m a. The second argument of extend has type w a—not a.
Actually, you're wrong: there is one!
https://hackage.haskell.org/package/acme-cofunctor

What is the general case of QuickCheck's promote function?

What is the general term for a functor with a structure resembling QuickCheck's promote function, i.e., a function of the form:
promote :: (a -> f b) -> f (a -> b)
(this is the inverse of flip $ fmap (flip ($)) :: f (a -> b) -> (a -> f b)). Are there even any functors with such an operation, other than (->) r and Id? (I'm sure there must be). Googling 'quickcheck promote' only turned up the QuickCheck documentation, which doesn't give promote in any more general context AFAICS; searching SO for 'quickcheck promote' produces no results.
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
Given that Monad is a more powerful interface than Applicative, this tells us that a -> f b can do more things than f (a -> b). It follows that a function of type (a -> f b) -> f (a -> b) can't be injective: the domain is, loosely speaking, bigger than the codomain. This means there's no way you can possibly preserve the behavior of the function. It just doesn't work out across generic functors.
You can, of course, characterize functors in which that operation is injective. Identity and (->) a are certainly examples. I'm willing to bet there are more examples, but nothing jumps out at me immediately.
So far I found these ways of constructing an f with the promote morphism:
f = Identity
if f and g both have promote then the pair functor h t = (f t, g t) also does
if f and g both have promote then the composition h t = f (g t) also does
if f has the promote property and g is any contrafunctor then the functor h t = g t -> f t has the promote property
The last property can be generalized to profunctors g, but then f will be merely a profunctor, so it's probably not very useful, unless you only require profunctors.
Now, using these four constructions, we can find many examples of functors f for which promote exists:
f t = (t,t)
f t = (t, b -> t)
f t = (t -> a) -> t
f t = ((t,t) -> b) -> (t,t,t)
f t = ((t, t, c -> t, (t -> b) -> t) -> a) -> t
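For instance, a sketch of promote for the first nontrivial example above, f t = (t, t), with the result written as an uncurried pair:
-- promote specialized to f t = (t, t): split the function into its two projections.
promotePair :: (a -> (b, b)) -> (a -> b, a -> b)
promotePair f = (fst . f, snd . f)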
Also note that the promote property implies that f is pointed.
point :: t -> f t
point x = fmap (const x) (promote id)
Essentially the same question: Is this property of a functor stronger than a monad?
Data.Distributive has
class Functor g => Distributive g where
  distribute :: Functor f => f (g a) -> g (f a)
  -- other non-critical methods
Renaming your variables, you get
promote :: (c -> g a) -> g (c -> a)
Using slightly invalid syntax for clarity,
promote :: ((c ->) (g a)) -> g ((c ->) a)
(c ->) is a Functor, so the type of promote is a special case of the type of distribute. Thus every Distributive functor supports your promote. I don't know if any support promote but not Distributive.
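Spelled out as code, that specialization is immediate (a small sketch assuming the distributive package):
import Data.Distributive (Distributive (distribute))

-- promote is distribute with the outer Functor fixed to ((->) c).
promote :: Distributive g => (c -> g a) -> g (c -> a)
promote = distribute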

Resources