According to the Typeclassopedia (among other sources), Applicative logically belongs between Monad and Pointed (and thus Functor) in the type class hierarchy, so we would ideally have something like this if the Haskell prelude were written today:
class Functor f where
    fmap :: (a -> b) -> f a -> f b

class Functor f => Pointed f where
    pure :: a -> f a

class Pointed f => Applicative f where
    (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
    -- either the traditional bind operation
    (>>=) :: m a -> (a -> m b) -> m b
    -- or the join operation, which together with fmap is enough
    join :: m (m a) -> m a
    -- or both, with mutual default definitions
    x >>= f = join (fmap f x)
    join x = x >>= id
    -- with return replaced by the inherited pure
    -- ignoring fail for the purposes of discussion
(Where those default definitions were re-typed by me from the explanation at Wikipedia, errors being my own; but even if there are errors in my transcription, such default definitions are at least possible in principle.)
As the libraries are currently defined, we have:
liftA :: (Applicative f) => (a -> b) -> f a -> f b
liftM :: (Monad m) => (a -> b) -> m a -> m b
and:
(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
ap :: (Monad m) => m (a -> b) -> m a -> m b
Note the similarity between these types within each pair.
My question is: are liftM (as distinct from liftA) and ap (as distinct from <*>) simply a result of the historical reality that Monad wasn't designed with Pointed and Applicative in mind? Or are they in some other behavioral way (potentially, for some legal Monad definitions) distinct from the versions that only require an Applicative context?
If they are distinct, could you provide a simple set of definitions (obeying the laws required of Monad, Applicative, Pointed, and Functor definitions described in the Typeclassopedia and elsewhere but not enforced by the type system) for which liftA and liftM behave differently?
Alternatively, if they are not distinct, could you prove their equivalence using those same laws as premises?
liftA, liftM, fmap, and (.) (the last being fmap for the function functor) should all be the same function, and they must be if they satisfy the functor law:
fmap id = id
However, this is not checked by Haskell.
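As a quick spot check (an illustration of the claim, not a proof), the three functions agree on a couple of standard, law-abiding instances:

import Control.Applicative (liftA)
import Control.Monad (liftM)

-- liftA, liftM and fmap give identical results for Maybe and [];
-- any difference would indicate an unlawful instance.
agree :: Bool
agree = and
    [ liftM (+1) (Just 2) == liftA (+1) (Just 2)
    , liftM (+1) (Just 2) == fmap  (+1) (Just 2)
    , liftM (*2) [1,2,3]  == liftA (*2) [1,2,3 :: Int]
    , liftM (*2) [1,2,3]  == fmap  (*2) [1,2,3 :: Int]
    ]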
Now for Applicative. It's possible for ap and <*> to be distinct for some functors simply because there could be more than one implementation that satisfies the types and the laws. For example, List has more than one possible Applicative instance. You could declare an applicative as follows:
instance Applicative [] where
    (f:fs) <*> (x:xs) = f x : (fs <*> xs)
    _      <*> _      = []
    pure = repeat
The ap function would still be defined as liftM2 id, which is the Applicative instance that comes for free with every Monad. So here you have an example of a type constructor with more than one Applicative instance, both of which satisfy the laws. But if your monads and your applicative functors disagree, it's considered good form to give them different types. For example, the Applicative instance above does not agree with the monad for [], so you should really say newtype ZipList a = ZipList [a] and then write the new instance for ZipList instead of [].
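For concreteness, here is a minimal sketch of that newtype approach (essentially what Control.Applicative's ZipList does, written with primed names to avoid clashing with it); on the underlying lists, the zippy <*> and the monad-derived ap visibly disagree:

import Control.Monad (ap)

newtype ZipList' a = ZipList' { getZipList' :: [a] }

instance Functor ZipList' where
    fmap f (ZipList' xs) = ZipList' (map f xs)

instance Applicative ZipList' where
    pure = ZipList' . repeat
    ZipList' fs <*> ZipList' xs = ZipList' (zipWith ($) fs xs)

-- For plain lists, ap = liftM2 id pairs every function with every argument,
-- while the zippy <*> above pairs them positionally:
--   getZipList' (ZipList' [(+1), (*10)] <*> ZipList' [1, 2])  ==  [2, 20]
--   [(+1), (*10)] `ap` [1, 2]                                 ==  [2, 3, 10, 20]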
They can differ, but they shouldn't.
They can differ because they can have different implementations: one is defined in an instance Applicative while the other is defined in an instance Monad. But if they indeed differ, then I'd say the programmer who wrote those instances wrote misleading code.
You are right: the functions exist as they do for historical reasons. People have strong ideas about how things should have been.
I've been on a bit of a "distilling everything to its fundamentals" kick lately, and I've been unable to find clear theoretical reasons for how the Traversable typeclass is defined, only practical ones of "it's useful to be able to traverse over applicative coalgebras, and lots of datatypes can do it" and a whole lot of hints.
I'm aware that there's an applicative "family", as described by https://duplode.github.io/posts/divisible-and-the-monoidal-quartet.html.
I'm also aware that while Traversable traversals are applicative coalgebras, the Traversable1 typeclass from 'semigroupoids' describes apply coalgebras, and the Distributive typeclass from 'distributive' describes functor algebras.
Additionally, I'm aware that Foldable, Foldable1, and theoretical fold family members, describe datatypes that can be folded using monoids, semigroups, and corresponding monoid family members such as magmas (for folding as a binary tree) and commutative versions of each (for folding as unordered versions of each).
As such, as Traversable is a subclass of Foldable, I assume it's monoidal in nature, and similarly I assume Traversable1 is semigroupal in nature, and Distributive is comonoidal in nature (as mentioned in its description in the 'distributive' package).
This feels like the right track, but where do Applicative and Apply come from here? Are there magmatic and commutative versions? Would there be a distributive family in a category with non-trivial comonoids?
Essentially, my question is "do these typeclasses exist, and what are they? if not, why not?":
class FoldableMagma t => TraversableMagma t where
    traverseMagma :: ??? f => (a -> f b) -> (t a -> f (t b))

class FoldableCommute t => TraversableCommute t where
    traverseCommute :: ??? f => (a -> f b) -> (t a -> f (t b))

class Foldable t => ContraTraversable t where
    contraTraverse :: Divisible f => (b -> f a) -> (t a -> f (t b))
    -- I'm really not sure on this last one,
    -- but it's how I'd expect an endofunctor over coalgebras to look,
    -- which seems potentially related to traversables?
Presumably less important bonus question: while attempting to research this, I came across the 'data-functor-logistic' package https://hackage.haskell.org/package/data-functor-logistic
This describes a version of Distributive over contravariant functors - is there an equivalent Traversable over Divisibles (or Decidables)?
I'm not aware of any library that implements those classes, but I'll try to unravel what those classes would represent. I am a programmer, not a category theorist, so take this with a grain of salt.
Applicative variants
ApplyMagma
The ApplyMagma class has exactly the same methods as the Apply class, but it doesn't need to follow the associativity law.
class Functor f => ApplyMagma f where
    (<.>) :: f (a -> b) -> f a -> f b
If Apply is analogous to semigroups, ApplyMagma is analogous to magmas.
ApplyCommute
The ApplyCommute class will be equivalent to the Apply class but with the following commutativity law:
f <$> x <.> y = flip f <$> y <.> x
If Apply is analogous to semigroups, ApplyCommute is analogous to commutative semigroups.
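As a rough illustration (my example, not part of the semigroupoids documentation), Maybe's applicative effects commute, so Maybe would be a candidate ApplyCommute instance. The check below uses (<*>) in place of (<.>), since the two coincide for Maybe:

commutesForMaybe :: Bool
commutesForMaybe = and
    [ (f <$> x <*> y) == (flip f <$> y <*> x)
    | x <- inputs, y <- inputs ]
  where
    f :: Int -> Int -> Int
    f = (-)  -- deliberately non-commutative, so only the order of effects is tested
    inputs = [Nothing, Just 1, Just 2 :: Maybe Int]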
Traversable1 variants
Traversable1Magma
A Traversable1Magma can be seen as a Traversable1 with more information provided about the structure. While the Foldable1 class has a toNonEmpty method, the FoldableMagma class could have a toBinaryTree method.
class (FoldableMagma t, Traversable1 t) => Traversable1Magma t where
    traverseMagma :: ApplyMagma f => (a -> f b) -> (t a -> f (t b))
Traversable1Commute
A Traversable1Commute can be seen as a Traversable1 without a defined ordering to the elements. If it didn't require an Ord a constraint, Set from containers could be an instance of this class. Traversable1Commute could be a superclass of Traversable1.
class (FoldableCommute t, Functor t) => Traversable1Commute t where
    traverseCommute :: ApplyCommute f => (a -> f b) -> (t a -> f (t b))
Note that these are variants of Traversable1 because neither ApplyMagma nor ApplyCommute have a function equivalent to pure.
ContraTraversable
ContraTraversable does not have any instances. To see why, look at the type of the contraTraverse function.
contraTraverse :: Divisible f => (b -> f a) -> (t a -> f (t b))
We can specialize this to the following:
contraTraverse :: Monoid b => (b -> Op b a) -> (t a -> Op b (t b))
Which is equivalent to the following:
contraTraverse ~ Monoid b => (b -> a -> b) -> t a -> t b -> b
Using const and the conquer function from Divisible, this allows us to create a value of any type, which is impossible.
Since asking this and receiving the previous (excellent) answer, I have learnt another reason why Applicative is used: algebraic datatypes!
With no constraint at all on the traversal's functor, it can only describe uninhabited datatypes, like this:
data V1 a

instance VTraversable V1 where
    vtraverse _ = \case {}
    -- uninhabited types can be pattern matched (EmptyCase) to create any result type
If it were Functor, it could describe algebraic datatypes that look something like these:
data FTrav1 a = FTrav1 a

instance FTraversable FTrav1 where
    ftraverse strat (FTrav1 a) = FTrav1 <$> strat a

data FTrav2 a = FTrav2_1 a | FTrav2_2 (FTrav1 a)

instance FTraversable FTrav2 where
    ftraverse strat (FTrav2_1 a)  = FTrav2_1 <$> strat a
    ftraverse strat (FTrav2_2 fa) = FTrav2_2 <$> ftraverse strat fa
Essentially, it's any datatype with an arbitrary (potentially infinite, if that were describable in Haskell) number of constructors, each carrying a single FTraversable argument (a bare a field counts, since a ~ Identity a up to isomorphism). This is saying that any such FTraversable f a is isomorphic to (f (), a), the Writer functor.
Introducing Apply enables extra datatypes like the following:
data ApplyTrav1 a = ApplyTrav1 a a

instance Traversable1 ApplyTrav1 where
    traverse1 strat (ApplyTrav1 a b) = ApplyTrav1 <$> strat a <.> strat b

data ApplyTrav2 a = ApplyTrav2_1 a (ApplyTrav1 a) | ApplyTrav2_2 (ApplyTrav1 a) a

instance Traversable1 ApplyTrav2 where
    traverse1 strat (ApplyTrav2_1 a fa) = ApplyTrav2_1 <$> strat a <.> traverse1 strat fa
    traverse1 strat (ApplyTrav2_2 fa a) = ApplyTrav2_2 <$> traverse1 strat fa <.> strat a
Now constructors can have arbitrarily many arguments, as long as the number is finite and greater than zero! The isomorphism is now to (f (), NonEmpty a), where the NonEmpty holds exactly as many elements as the shape f () has positions.
Applicative enables the following:
data ApplicTrav a = ApplicTrav0 | ApplicTrav a a

instance Traversable ApplicTrav where
    traverse _     ApplicTrav0      = pure ApplicTrav0
    traverse strat (ApplicTrav a b) = ApplicTrav <$> strat a <*> strat b
Now empty constructors are allowed! The isomorphism is now to (f (), [a]).
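As a quick sketch of that isomorphism for ApplicTrav (with hypothetical helper names toPair/fromPair of my own):

-- Witnesses for ApplicTrav a being isomorphic to (ApplicTrav (), [a]):
-- a shape plus its contents (here the list always has length 0 or 2).
toPair :: ApplicTrav a -> (ApplicTrav (), [a])
toPair ApplicTrav0      = (ApplicTrav0, [])
toPair (ApplicTrav x y) = (ApplicTrav () (), [x, y])

fromPair :: (ApplicTrav (), [a]) -> ApplicTrav a
fromPair (ApplicTrav0, [])        = ApplicTrav0
fromPair (ApplicTrav _ _, [x, y]) = ApplicTrav x y
fromPair _                        = error "shape and contents out of sync"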
A hypothetical commutative Applicative would be used for commutative algebraic datatypes, were they a thing - perhaps, if Set enforced that it were only foldable with commutative monoids, they would be relevant! But to my knowledge, commutative datatypes are not a core part of Haskell, and so this form of traversal will not show up for algebraic datatypes.
Distributive is similar, it describes functors with a single constructor of arbitrarily many records (potentially infinite).
Logistic is unrelated to this algebraic interpretation, to my knowledge - since the Reader functor is commonly used with Distributives to create a collection of getter functions, Logistic is designed to work with the Op contravariant functor to create a collection of setter functions.
This suggests to me that the equivalent for Traversable doesn't exist, due to the Writer functor that characterises Traversables being its own opposite ((r,a) is isomorphic to (a,r)).
The Monad typeclass can be defined in terms of return and (>>=). However, if we already have a Functor instance for some type constructor f, then this definition is sort of 'more than we need' in that (>>=) and return could be used to implement fmap so we're not making use of the Functor instance we assumed.
In contrast, defining return and join seems like a more 'minimal'/less redundant way to make f a Monad. This way, the Functor constraint is essential because fmap cannot be written in terms of these operations. (Note join is not necessarily the only minimal way to go from Functor to Monad: I think (>=>) works as well.)
Similarly, Applicative can be defined in terms of pure and (<*>), but this definition again does not take advantage of the Functor constraint since these operations are enough to define fmap.
However, Applicative f can also be defined using unit :: f () and (>*<) :: f a -> f b -> f (a, b). These operations are not enough to define fmap so I would say in some sense this is a more minimal way to go from Functor to Applicative.
Is there a characterization of Monad as fmap, unit, (>*<), and some other operator which is minimal in that none of these functions can be derived from the others?
(>>=) does not work, since it can implement a >*< b = a >>= (\ x -> b >>= \ y -> pure (x, y)) where pure x = fmap (const x) unit.
Nor does join since m >>= k = join (fmap k m) so (>*<) can be implemented as above.
I suspect (>=>) fails similarly.
I have something, I think. It's far from elegant, but maybe it's enough to get you unstuck, at least. I started with join :: m (m a) -> ??? and asked "what could it produce that would require (<*>) to get back to m a?", which I found a fruitful line of thought that probably has more spoils.
If you introduce a new type T which can only be constructed inside the monad:
t :: m T
Then you could define a join-like operation which requires such a T:
joinT :: m (m a) -> m (T -> a)
The only way we can produce the T we need to get to the sweet, sweet a inside is by using t, and then we have to combine that with the result of joinT somehow. There are two basic operations that can combine two ms into one: (<*>) and joinT -- fmap is no help. joinT is not going to work, because we'll just need yet another T to use its result, so (<*>) is the only option, meaning that (<*>) can't be defined in terms of joinT.
You could roll that all up into an existential, if you prefer.
joinT :: (forall t. m t -> (m (m a) -> m (t -> a)) -> r) -> r
I'm doing exercises from the book "Programming in Haskell (2nd Edition)" and I have some problems in understanding the following:
"Given the following type of expressions
data Expr a = Var a | Val Int | Add (Expr a) (Expr a)
    deriving Show
that contain variables of some type a, show how to make this type into instances of the Functor, Applicative and Monad classes. With the aid of an example, explain what the >>= operator for this type does."
I found a solution to the first question, which is the same as here: https://github.com/evturn/programming-in-haskell/blob/master/12-monads-and-more/12.05-exercises.hs (ex. 7), that is type correct.
The problem is that I cannot find out the sense of this exercise and the meaning of what this solution actually does.
To understand the solution, you need to get an intuition over a Functor, an Applicative and a Monad.
That being said, fmap, <*> and >>= are all ways to transform data inside some arbitrary structure f (in your case, Expr) using a function from a to b.
Take a look at the type class definitions of Functor, Applicative and Monad, for example.
class Functor f where
    fmap :: (a -> b) -> f a -> f b

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
On the bigger picture, though, these functions also run the effects of the algebraic datatype that has instances of these type classes.
For example, I will provide the rough definition of the Maybe monad.
instance Monad Maybe where
    (Just something) >>= f = f something
    Nothing          >>= _ = Nothing
In this context, the bind combinator >>= returns Nothing if the input is Nothing; otherwise it applies the arbitrary function f to the value something. Here f is a transformation from a to Maybe b, which matches the type of the >>= combinator, m a -> (a -> m b) -> m b, with m being the Maybe datatype.
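To connect this back to the exercise, here is a sketch of the Expr instances along the lines of the linked solution; for Expr, >>= substitutes an expression for each variable:

data Expr a = Var a | Val Int | Add (Expr a) (Expr a)
    deriving Show

instance Functor Expr where
    fmap f (Var a)   = Var (f a)
    fmap _ (Val n)   = Val n
    fmap f (Add l r) = Add (fmap f l) (fmap f r)

instance Applicative Expr where
    pure = Var
    Var f   <*> ex = fmap f ex
    Val n   <*> _  = Val n
    Add l r <*> ex = Add (l <*> ex) (r <*> ex)

instance Monad Expr where
    Var a   >>= f = f a                      -- replace the variable by whatever f maps it to
    Val n   >>= _ = Val n                    -- constants are untouched
    Add l r >>= f = Add (l >>= f) (r >>= f)  -- substitute in both subtrees

-- Example: substituting an expression for the variable 'x':
--   Add (Var 'x') (Val 1) >>= \v -> if v == 'x' then Add (Val 2) (Val 3) else Var v
-- evaluates to
--   Add (Add (Val 2) (Val 3)) (Val 1)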
It appears that type classes such as Applicative, Monad and Arrow have some sort of sum type equivalent in type classes such as Alternative, MonadPlus and ArrowPlus respectively. For example, Applicative and Alternative can be used to define the following:
(<&&>) :: Applicative f => f a -> f b -> f (a, b)
a <&&> b = (,) <$> a <*> b
(<||>) :: Alternative f => f a -> f b -> f (Either a b)
a <||> b = (Left <$> a) <|> (Right <$> b)
However, in all of these cases (as well as with ArrowChoice), the product type class is a prerequisite for the sum type class. Are there type class rules or common functions that depend on the prerequisite class? The Typeclassopedia touches on these relationships, but unfortunately I couldn't find any explicit reason for the dependency.
Arrow is basically the class for monoidal categories[1] – with “monoid” not referring to Monoid, but to the product monoid of Haskell types, i.e. with unit element () and multiplication (,). Now, sum types make up a monoid as well, and that's what ArrowChoice uses. These two classes are in that sense complementary; ArrowChoice shouldn't really be a subclass of Arrow.
In a monoidal category, you can then go on to have monoidal functors. How these come out depends on what you use as your type-monoid. For (), (,), you get
class ProdMonoidalFtor f where
    prodUnit :: () -> f ()
    prodZip :: (f a, f b) -> f (a,b)

type (+) = Either

class SumMonoidalFtor f where
    sumUnit :: Void -> f Void
    sumZip :: f a + f b -> f (a+b)
Turns out the latter is basically useless, because Void is the initial object of Hask, meaning there exists exactly one Void -> a (namely absurd) for all types a. However, what does make some sense is comonoidal functors with +:
class SumCoMonoidalFtor f where
    sumCounit :: f Void -> Void  -- I bet you find this useless too, but it's not totally.
    sumCozip :: f (a+b) -> f a + f b
That in turn wouldn't make sense for product types, because () is the terminal object.
What's interesting now is that ProdMonoidalFtor is equivalent to Applicative:
instance (Functor f, ProdMonoidalFtor f) => Applicative f where
    pure x = fmap (const x) $ prodUnit ()
    fs <*> xs = fmap (\(f,x) -> f x) $ prodZip (fs,xs)
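For a concrete illustration (my own example, not from the original text), Maybe's familiar Applicative falls out of such a monoidal-functor instance:

instance ProdMonoidalFtor Maybe where
    prodUnit ()              = Just ()
    prodZip (Just a, Just b) = Just (a, b)
    prodZip _                = Nothing

Feeding this through the definitions above recovers the usual behaviour: pure = Just, and Just f <*> Just x = Just (f x), with Nothing propagating otherwise.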
One might then suspect that Alternative is equivalent to SumMonoidalFtor, but it's not! Actually, it is equivalent to decisive functors, which are to comonads as applicatives are to monads.
Alternative and MonadPlus, on the other hand, don't really seem to have much mathematical backing; they're essentially what you get when “un-Kleisli-ing” the ArrowChoice class, but using the Kleisli category that arises from ProdMonoidalFtor. It's all a bit dubious.
[1] That's considering only first/left, second/right, and ***/+++. As for the remaining &&&, ||| and arr, these are more specific and IMO belong in separate classes.
I would like to have a function for either mapping a pure function to a container or sequencing applicative/monadic action through it. For pure mapping we have
fmap :: Functor f => (a -> b) -> (f a -> f b)
For monadic sequencing we have (from Data.Traversable)
mapM :: (Traversable f, Monad m) => (a -> m b) -> (f a -> m (f b))
Which is similar to
mapKleisli :: (Traversable f, Monad m) => Kleisli m a b -> Kleisli m (f a) (f b)
mapKleisli = Kleisli . mapM . runKleisli
We know both (->) and (Kleisli m) are categories (and arrows). So it's natural to make a generalization:
mapCategory :: (X f, Category c) => c a b -> c (f a) (f b)
Do you know such a class X with similar method? Maybe, somewhere on hackage? I tried to hoogle/hayoo but haven't found anything appropriate.
Update:
Now I know better what I need. Both Kleisli arrows and (->) are instances of ArrowApply, which is as powerful as Monad. I came up with this arrow-based version of Traversable:
{-# LANGUAGE TypeOperators #-}
import Prelude hiding (id, (.), mapM)
import Control.Arrow
import Control.Category

class Traversable f where
    traverse :: ArrowApply (~>) => f a -> (a ~> b) ~> f b

mapArrow :: (ArrowApply (~>), Traversable f) => a ~> b -> f a ~> f b
mapArrow a = arr (\x -> (traverse x, a)) >>> app

instance Traversable Maybe where
    traverse Nothing  = arr (const Nothing)
    traverse (Just x) = arr (\a -> (a, x)) >>> app >>> arr Just

instance Traversable [] where
    traverse []       = arr (const [])
    traverse (x : xs) = undefined -- this is hard!
I could use the usual Applicative-based Traversable, with Identity for pure functions, but I'm not sure that is good. Considering pure functions as a special case of monadic actions is weird. Interpreting both pure functions and monadic actions as instances of some action class (Category/Arrow/ArrowApply) looks more straightforward to me.
Questions: would you like to finish instance for []? Has my opinion about ArrowApply vs Monad any sense?
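For reference, here is one way the [] instance could be finished under the class above; a sketch relying only on arr, app, (***) and (>>>):

instance Traversable [] where
    traverse []       = arr (const [])
    traverse (x : xs) =
        -- pair the incoming arrow with the head, and the tail's traversal
        -- with that same arrow; run both with app, then cons the results
        arr (\g -> ((g, x), (traverse xs, g))) >>> (app *** app) >>> arr (uncurry (:))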
You're asking for "some class X", but it should be pretty clear that the most (or perhaps, only) correct name for this class would be "Functor". What you want is simply a functor class defined for an arbitrary Category instance, rather than limited to (->).
Of course, your definition is still limited to (endo)functors from a category to a subcategory defined by the type constructor giving the instance. If you generalize a bit further, there's no reason for the two categories to be the same, giving you a type class something like this one:
class (Category r, Category t) => Functor f r t | f r -> t, f t -> r where
    fmap :: r a b -> t (f a) (f b)
This is still awfully limited vs. the full concept of a functor in category theory, but oh well.
It's also interesting to observe that this still has a (->) type constructor in it; that's because, even though we're modeling the source and target categories with arbitrary instances, the whole thing (and in particular, the functor itself) still exists in some sense in Hask, i.e., the category associated with (->). The other half of the functor (the part mapping objects) is, roughly speaking, the (->) in the kind * -> * of the type constructor f.
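To tie this back to the question, here is a hedged sketch of how the question's mapKleisli becomes an instance of this generalized class (assuming MultiParamTypeClasses, FlexibleInstances and FunctionalDependencies are enabled):

-- Lists as a functor from the Kleisli category of any monad to itself,
-- recovering mapKleisli; the plain (->)/(->)" instance would just be the usual fmap.
instance Monad m => Functor [] (Kleisli m) (Kleisli m) where
    fmap (Kleisli f) = Kleisli (mapM f)  -- mapM from Data.Traversable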