In Scalaz every Monad instance is automatically an instance of Applicative.
implicit val listInstance = new Monad[List] {
  def point[A](a: => A) = List(a)
  def bind[A, B](fa: List[A])(f: A => List[B]) = fa flatMap f
}
List(2) <*> List((x: Int) => x + 1) // Works!
Another example: Arrow is automatically a Profunctor.
However, in Haskell I must provide an instance of Applicative for every Monad again and again.
Is it possible to avoid this repetitive job?
The problem comes when there are two places from which to derive the Applicative instance. For instance, suppose m is the type a b for some Arrow a. Then there's an obvious Applicative instance arising from the Arrow structure as well as the one arising from the Monad instance. Which one should the compiler use? They should work out the same, of course, but Haskell has no way to check this. By making us write out the instances, Haskell at least forces us to think about the consistency of our definitions.
If you want, there's the WrappedMonad newtype in Control.Applicative, which provides all the obvious instances via a newtype wrapper, but wrapping and unwrapping with WrapMonad and unwrapMonad all the time isn't that attractive either.
It isn't currently possible, though it would be if you changed the existing library to support this. Turning DefaultSignatures on would let you write
class Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

  default pure :: Monad f => a -> f a
  default (<*>) :: Monad f => f (a -> b) -> f a -> f b
  pure = return
  (<*>) = ap
Then once you had implemented instance Monad M where {- ... -}, a simple instance Applicative M (with no where or method definitions) would inherit these default implementations. I'm not sure why this wasn't done.
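For illustration, here is a self-contained sketch of that idea. The class and method names (Applicative', pure', starAp) are renamed purely to avoid clashing with the real Prelude classes; the mechanism is exactly the DefaultSignatures trick described above.

{-# LANGUAGE DefaultSignatures #-}

import Control.Monad (ap)

-- A renamed stand-in for the hypothetical Applicative class above.
class Applicative' f where
  pure'  :: a -> f a
  starAp :: f (a -> b) -> f a -> f b

  -- Default signatures let the defaults demand a stronger constraint
  -- (Monad f) than the class head itself does.
  default pure' :: Monad f => a -> f a
  pure' = return

  default starAp :: Monad f => f (a -> b) -> f a -> f b
  starAp = ap

-- With the defaults in place, an empty instance body is enough
-- for anything that already has a Monad instance:
instance Applicative' Maybe
instance Applicative' []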
I am trying to implement MonadUnliftIO for Snap and am analyzing Snap's class instances.
I discovered that ap is used to implement Applicative, while ap requires Monad and Monad requires Applicative. It looks like a loop.
Until now I thought it was not possible to write such things.
What is the limit of this kind of trick?
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  return :: a -> m a

instance Applicative Snap where
  pure x = ...
  (<*>) = ap

ap :: Monad m => m (a -> b) -> m a -> m b
This only works because Snap has a Monad instance (and it's actually in scope at that point).
Effectively, the compiler handles declarations in two separate passes: first it resolves all the instance heads
instance Applicative Snap
instance Monad Snap
...without even looking at the actual method implementations. This works out fine: Monad is happy as long as it sees the Applicative instance.
So then it already knows that Snap is a monad. Then it proceeds to typecheck the (<*>) implementation, notices that it requires the Monad instance, and... yeah, it's there, so that too is fine.
The actual reason we have ap :: Monad m => ... is mostly historical: the Haskell98 Monad class did not have Applicative or even Functor as a superclass, so it was possible to write code with a Monad m => ... constraint that could then not use fmap or <*>. Therefore the liftM and ap functions were introduced as replacements.
Then, when the better current class hierarchy was established, many instances were simply defined by referring back to the already existing Monad instance, which is after all sufficient for everything.
IMO it is usually a good idea to directly implement <*> and definitely fmap before writing the Monad instance, rather than the other way around.
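As a hedged illustration of that bottom-up order, here is a sketch with an invented Maybe-like type (Perhaps is hypothetical, chosen only to avoid clashing with Maybe): each instance is written directly, and the Monad instance comes last.

-- A hypothetical Maybe-like type, defined "bottom-up":
data Perhaps a = Nope | Have a

instance Functor Perhaps where
  fmap _ Nope     = Nope
  fmap f (Have x) = Have (f x)

instance Applicative Perhaps where
  pure = Have
  Nope   <*> _  = Nope
  Have f <*> px = fmap f px   -- reuses fmap, not the Monad instance

instance Monad Perhaps where
  Nope   >>= _ = Nope
  Have x >>= k = k x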
I think you are imagining a cycle like this:
(<*>) is implemented with ap
(>>=) is implemented with (<*>)
ap is implemented using (>>=)
And yes, if you try this, it will indeed give you an infinite loop!
However, this is not what your code block does. Its implementations look more like this:
(>>=) is implemented from first principles, without using any Applicative functions
ap is implemented using (>>=)
(<*>) is implemented in terms of ap
This is obviously fine: there are no cycles of any sort in this set of function definitions.
One thing which might still be a bit confusing is: how can you implement an Applicative function in terms of a Monad function, when a type can only be a Monad if it is already Applicative? To answer this, let's add explicit type signatures to your code sample (note that signatures inside an instance require the InstanceSigs extension to compile):
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  return :: a -> m a

instance Applicative Snap where
  pure :: a -> Snap a
  pure x = ...
  (<*>) :: Snap (a -> b) -> Snap a -> Snap b
  (<*>) = ap

ap :: Monad m => m (a -> b) -> m a -> m b
The answer is now clear: we are not in fact defining (<*>) for just any arbitrary Applicative type! Rather, we are defining it for Snap only, which means we can use any function defined to work on Snaps — including those from the Monad typeclass. The fact that this function happens to be within an instance Applicative Snap block doesn’t matter: in all other respects, it’s just an ordinary function definition, and there’s no reason why the full range of Snap functions shouldn’t be able to appear in it.
There should be some instance Monad Snap somewhere else. The use of ap in the Applicative instance will make use of >>= from that instance.
In general, an instance for Applicative cannot make use of ap in this way, but when the applicative is also a monad, it is quite common to do so, since it's convenient.
Note that, if one chooses this route, one should avoid using <*> or ap inside the definition of >>=, since that could lead to infinite recursion.
The fact that the two instances are mutually recursive, in some sense, is not an issue. Haskell allows mutual recursion, and this also reflects on instances. The programmer however must ensure that the recursion actually terminates, or be prepared to have a non-terminating program.
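For concreteness, here is a hedged sketch of the usual pattern with an invented State-like type (Counter is hypothetical, not a library type): (>>=) is written from first principles, and Functor and Applicative are then filled in with liftM and ap.

import Control.Monad (ap, liftM)

-- A hypothetical State-like type, used only to illustrate the pattern.
newtype Counter a = Counter { runCounter :: Int -> (a, Int) }

instance Functor Counter where
  fmap = liftM   -- fine: liftM only needs the Monad instance below

instance Applicative Counter where
  pure x = Counter (\n -> (x, n))
  (<*>)  = ap    -- fine: ap only needs (>>=) and pure

instance Monad Counter where
  -- (>>=) is defined from first principles; it must not call (<*>) or ap,
  -- otherwise the definitions would chase each other forever.
  Counter m >>= k = Counter $ \n ->
    let (a, n') = m n
    in  runCounter (k a) n'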
The Monad typeclass can be defined in terms of return and (>>=). However, if we already have a Functor instance for some type constructor f, then this definition is sort of 'more than we need' in that (>>=) and return could be used to implement fmap so we're not making use of the Functor instance we assumed.
In contrast, defining return and join seems like a more 'minimal'/less redundant way to make f a Monad. This way, the Functor constraint is essential because fmap cannot be written in terms of these operations. (Note join is not necessarily the only minimal way to go from Functor to Monad: I think (>=>) works as well.)
Similarly, Applicative can be defined in terms of pure and (<*>), but this definition again does not take advantage of the Functor constraint since these operations are enough to define fmap.
However, Applicative f can also be defined using unit :: f () and (>*<) :: f a -> f b -> f (a, b). These operations are not enough to define fmap so I would say in some sense this is a more minimal way to go from Functor to Applicative.
Is there a characterization of Monad as fmap, unit, (>*<), and some other operator which is minimal in that none of these functions can be derived from the others?
(>>=) does not work, since it can implement a >*< b = a >>= (\ x -> b >>= \ y -> pure (x, y)) where pure x = fmap (const x) unit.
Nor does join, since m >>= k = join (fmap k m), so (>*<) can be implemented as above.
I suspect (>=>) fails similarly.
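To make the setup above concrete, here is a hedged sketch (the class name Monoidal and its members are invented for illustration, not taken from base) of how pure and (<*>) fall out of fmap, unit and (>*<), and of the derivation that rules out (>>=):

-- Invented names (Monoidal, unit, >*<); this class is not in base.
class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f a -> f b -> f (a, b)

-- pure and (<*>) are recoverable from fmap, unit and (>*<) ...
pureM :: Monoidal f => a -> f a
pureM x = fmap (const x) unit

apM :: Monoidal f => f (a -> b) -> f a -> f b
apM ff fx = fmap (\(f, x) -> f x) (ff >*< fx)

-- ... while (>*<) is itself recoverable from (>>=) and pure,
-- which is why (>>=) is not a minimal extra operation.
pairM :: Monad m => m a -> m b -> m (a, b)
pairM ma mb = ma >>= \x -> mb >>= \y -> pure (x, y)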
I have something, I think. It's far from elegant, but maybe it's enough to get you unstuck, at least. I started with join :: m (m a) -> ??? and asked "what could it produce that would require (<*>) to get back to m a?", which I found to be a fruitful line of thought that probably has more spoils.
If you introduce a new type T which can only be constructed inside the monad:
t :: m T
Then you could define a join-like operation which requires such a T:
joinT :: m (m a) -> m (T -> a)
The only way we can produce the T we need to get to the sweet, sweet a inside is by using t, and then we have to combine that with the result of joinT somehow. There are two basic operations that can combine two ms into one: (<*>) and joinT -- fmap is no help. joinT is not going to work, because we'll just need yet another T to use its result, so (<*>) is the only option, meaning that (<*>) can't be defined in terms of joinT.
You could roll that all up into an existential, if you prefer.
joinT :: (forall t. m t -> (m (m a) -> m (t -> a)) -> r) -> r
Looking at the source for Monad:
class Monad m where
  (>>=) :: forall a b. m a -> (a -> m b) -> m b
  (>>) :: forall a b. m a -> m b -> m b
  return :: a -> m a
  fail :: String -> m a

  {-# INLINE (>>) #-}
  m >> k = m >>= \_ -> k -- <-- !! right here !!
  fail s = error s
You can see that >> has a default implementation. My question is, is it considered good or bad practice, and why, to include a function/combinator in the typeclass, instead of providing it separately outside of the typeclass?
That is, why not:
class Monad m where
  (>>=) :: forall a b. m a -> (a -> m b) -> m b
  return :: a -> m a
  fail :: String -> m a

  fail s = error s
and somewhere else:
(>>) :: Monad m => m a -> m b -> m b
{-# INLINE (>>) #-}
m >> k = m >>= \_ -> k
As far as I know, there are two main reasons to include "extra" functions:
Efficiency: Sometimes an inefficient generic implementation exists, and the class's author expects instance-specific implementations to be significantly better. In such cases, including the function in the class with a default implementation means that instances can use optimized versions if they want, but aren't required to. For a fun example of this, look at Foldable. This is also true of Monad.
Choice of implementation: Often there are several subsets of the class functions that could be used; including all the potential functions and using default implementations in terms of each other means that an instance can pick some functions to implement and get the rest automatically. This also applies to Foldable, but Eq is a simpler example.
This way, custom >> can be implemented for monads where it can be done more efficiently or naturally than via m >>= \_ -> k, but a default implementation still exists.
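As a concrete picture of the "choice of implementation" point, Eq is the simplest familiar example: each method has a default in terms of the other, so an instance only has to supply one of them. The sketch below mirrors how the class is declared in base (minus the MINIMAL pragma); the Prelude's own Eq is hidden only so the snippet stands alone.

import Prelude hiding (Eq, (==), (/=))

-- Roughly the Prelude's Eq class: mutual defaults mean an instance
-- may define either (==) or (/=) and get the other for free.
class Eq a where
  (==), (/=) :: a -> a -> Bool
  x == y = not (x /= y)
  x /= y = not (x == y)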
Another argument for including methods in the typeclass is when they should satisfy certain laws, or when they make the statement of those laws clearer. I would argue that the laws ought morally to be associated with the typeclass ("what must I provide in order to declare an instance of this class?"). For example, you might prefer to state the monad laws in terms of return, join and fmap, rather than return and >>=; that encourages you to put all four operators in the type class (and make Monad a subclass of Functor!), and give default definitions of >>= in terms of join and vice versa.
According to the Typeclassopedia (among other sources), Applicative logically belongs between Monad and Pointed (and thus Functor) in the type class hierarchy, so we would ideally have something like this if the Haskell prelude were written today:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Pointed f where
  pure :: a -> f a

class Pointed f => Applicative f where
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  -- either the traditional bind operation
  (>>=) :: (m a) -> (a -> m b) -> m b
  -- or the join operation, which together with fmap is enough
  join :: m (m a) -> m a
  -- or both with mutual default definitions
  x >>= f = join (fmap f x)
  join x = x >>= id
  -- with return replaced by the inherited pure
  -- ignoring fail for the purposes of discussion
(Those default definitions were re-typed by me from the explanation on Wikipedia, so any errors are my own; but even if I've made mistakes, such definitions are at least possible in principle.)
As the libraries are currently defined, we have:
liftA :: (Applicative f) => (a -> b) -> f a -> f b
liftM :: (Monad m) => (a -> b) -> m a -> m b
and:
(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
ap :: (Monad m) => m (a -> b) -> m a -> m b
Note the similarity between these types within each pair.
My question is: are liftM (as distinct from liftA) and ap (as distinct from <*>) simply a result of the historical reality that Monad wasn't designed with Pointed and Applicative in mind? Or are they in some other behavioral way (potentially, for some legal Monad definitions) distinct from the versions that only require an Applicative context?
If they are distinct, could you provide a simple set of definitions (obeying the laws required of Monad, Applicative, Pointed, and Functor definitions described in the Typeclassopedia and elsewhere but not enforced by the type system) for which liftA and liftM behave differently?
Alternatively, if they are not distinct, could you prove their equivalence using those same laws as premises?
liftA, liftM, and fmap should all be the same function (for the function functor, that shared function is just (.)), and they must be if they satisfy the functor law:
fmap id = id
However, this is not checked by Haskell.
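For reference, here are the standard definitions, essentially as they appear in base; the Maybe line in the comment is only an illustration of the intended agreement.

liftM :: Monad m => (a -> b) -> m a -> m b
liftM f m = m >>= \x -> return (f x)

liftA :: Applicative f => (a -> b) -> f a -> f b
liftA f x = pure f <*> x

-- For a lawful Functor/Applicative/Monad all three agree, e.g. for Maybe:
--   fmap (+1) (Just 2) == liftA (+1) (Just 2) == liftM (+1) (Just 2) == Just 3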
Now for Applicative. It's possible for ap and <*> to be distinct for some functors simply because there could be more than one implementation that satisfies the types and the laws. For example, List has more than one possible Applicative instance. You could declare an applicative as follows:
instance Applicative [] where
  (f:fs) <*> (x:xs) = f x : fs <*> xs
  _ <*> _ = []
  pure = repeat
The ap function would still be defined as liftM2 id, which gives the Applicative instance that comes for free with every Monad. So here you have an example of a type constructor having more than one Applicative instance, both of which satisfy the laws. However, if your monads and your applicative functors disagree, it's considered good form to give them different types. For example, the Applicative instance above does not agree with the monad for [], so you should really say newtype ZipList a = ZipList [a] and then make the new instance for ZipList instead of [].
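This is essentially what the ZipList newtype in Control.Applicative provides; a minimal sketch of such a wrapper, matching the behaviour of the instance above, looks like this:

newtype ZipList a = ZipList { getZipList :: [a] }

instance Functor ZipList where
  fmap f (ZipList xs) = ZipList (map f xs)

instance Applicative ZipList where
  pure = ZipList . repeat
  ZipList fs <*> ZipList xs = ZipList (zipWith ($) fs xs)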
They can differ, but they shouldn't.
They can differ because they can have different implementations: one is defined in an instance Applicative while the other is defined in an instance Monad. But if they indeed differ, then I'd say the programmer who wrote those instances wrote misleading code.
You are right: the functions exist as they do for historical reasons. People have strong ideas about how things should have been.