I am trying to implement MonadUnliftIO for Snap and have been analyzing the Snap type classes.
I discovered that ap is used to implement Applicative, while ap requires Monad and Monad requires Applicative. It looks like a loop.
Until now I thought it was not possible to write such things.
What are the limits of this kind of trick?
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
    return :: a -> m a

instance Applicative Snap where
    pure x = ...
    (<*>) = ap

ap :: Monad m => m (a -> b) -> m a -> m b
This only works because Snap has a Monad instance (and it's actually in scope at that point).
Effectively, the compiler handles declarations in two separate passes: first it resolves all the instance heads
instance Applicative Snap
instance Monad Snap
...without even looking at the actual method implementations. This works out fine: Monad is happy as long as it sees the Applicative instance.
So then it already knows that Snap is a monad. Then it proceeds to typecheck the (<*>) implementation, notices that it requires the Monad instance, and... yeah, it's there, so that too is fine.
The actual reason we have ap :: Monad m => ... is mostly historical: the Haskell 98 Monad class did not have Applicative or even Functor as a superclass, so it was possible to write code with a Monad m => ... constraint that could then not use fmap or <*>. Therefore the liftM and ap functions were introduced as replacements.
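Both were (and still are) defined purely in terms of the Monad methods; up to do-notation, the library definitions amount to the following (standalone copies for illustration; the real ones live in Control.Monad):

liftM :: Monad m => (a -> b) -> m a -> m b
liftM f m = m >>= \x -> return (f x)

ap :: Monad m => m (a -> b) -> m a -> m b
ap mf mx = mf >>= \f -> mx >>= \x -> return (f x)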
Then, when the better current class hierarchy was established, many instances were simply defined by referring back to the already existing Monad instance, which is after all sufficient for everything.
IMO it is usually a good idea to directly implement <*> and definitely fmap before writing the Monad instance, rather than the other way around.
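As a minimal sketch of that order of work, here is a made-up State-like type (Counter is purely illustrative and has nothing to do with Snap's internals) with Functor and Applicative written out directly and the Monad instance added last:

newtype Counter a = Counter { runCounter :: Int -> (a, Int) }

instance Functor Counter where
    fmap f (Counter g) = Counter $ \s -> let (a, s') = g s in (f a, s')

instance Applicative Counter where
    pure a = Counter $ \s -> (a, s)
    Counter mf <*> Counter ma = Counter $ \s ->
        let (f, s')  = mf s
            (a, s'') = ma s'
        in  (f a, s'')

instance Monad Counter where
    Counter ma >>= k = Counter $ \s ->
        let (a, s') = ma s in runCounter (k a) s'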
I think you are imagining a cycle like this:
(<*>) is implemented with ap
(>>=) is implemented with (<*>)
ap is implemented using (>>=)
And yes, if you try this, it will indeed give you an infinite loop!
However, this is not what your code block does. Its implementations look more like this:
(>>=) is implemented from first principles, without using any Applicative functions
ap is implemented using (>>=)
(<*>) is implemented in terms of ap
This is obviously fine: there are no cycles of any sort in this set of function definitions.
One thing which might still be a bit confusing is: how can you implement an Applicative function in terms of a Monad function, when a type can only be a Monad if it is already Applicative? To answer this, let's add explicit type signatures to your code sample (note that signatures on instance methods like this need the InstanceSigs language extension to compile):
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
    return :: a -> m a

instance Applicative Snap where
    pure :: a -> Snap a
    pure x = ...

    (<*>) :: Snap (a -> b) -> Snap a -> Snap b
    (<*>) = ap

ap :: Monad m => m (a -> b) -> m a -> m b
The answer is now clear: we are not in fact defining (<*>) for just any arbitrary Applicative type! Rather, we are defining it for Snap only, which means we can use any function defined to work on Snaps — including those from the Monad typeclass. The fact that this function happens to be within an instance Applicative Snap block doesn’t matter: in all other respects, it’s just an ordinary function definition, and there’s no reason why the full range of Snap functions shouldn’t be able to appear in it.
There should be some instance Monad Snap somewhere else. The ap use in the Applicative instance will make use of >>= from that instance.
In general, an instance of Applicative cannot make use of ap in this way, but when the applicative is also a monad, I think it is quite common to do so, since it's convenient.
Note that if one chooses this route, one should avoid using <*> or ap inside the definition of >>=, since that could lead to infinite recursion.
The fact that the two instances are mutually recursive, in some sense, is not an issue. Haskell allows mutual recursion, and this also reflects on instances. The programmer however must ensure that the recursion actually terminates, or be prepared to have a non-terminating program.
In Scalaz every Monad instance is automatically an instance of Applicative.
implicit val listInstance = new Monad[List] {
  def point[A](a: => A) = List(a)
  def bind[A, B](fa: List[A])(f: A => List[B]) = fa flatMap f
}
List(2) <*> List((x: Int) => x + 1) // Works!
Another example: Arrow is automatically a Profunctor.
However, in Haskell I must provide an instance of Applicative for every Monad again and again.
Is it possible to avoid this repetitive job?
The problem comes when there are two places from which to derive the Applicative instance. For instance, suppose m is the type a b where Arrow a. Then there's an obvious instance of Applicative from this definition as well. Which one should the compiler use? It should work out the same, of course, but Haskell has no way to check this. By making us write out the instances, Haskell at least forces us to think about the consistency of our definitions.
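To make the Arrow-derived instance concrete, base packages it behind the WrappedArrow newtype in Control.Applicative; a sketch of it (local definitions here, not the base source verbatim) looks like this:

import Control.Arrow (Arrow, arr, (&&&), (>>>))

newtype WrappedArrow a b c = WrapArrow { unwrapArrow :: a b c }

instance Arrow a => Functor (WrappedArrow a b) where
    fmap f (WrapArrow g) = WrapArrow (g >>> arr f)

instance Arrow a => Applicative (WrappedArrow a b) where
    pure x = WrapArrow (arr (const x))
    WrapArrow f <*> WrapArrow v =
        WrapArrow ((f &&& v) >>> arr (\(g, x) -> g x))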
If you want, there's the WrappedMonad newtype in Control.Applicative, which provides all the obvious instances via a newtype wrapper, but using WrapMonad and unwrapMonad all the time isn't that attractive either.
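For illustration, recovering <*> from a Monad instance through that wrapper looks roughly like this (apViaWrap is a made-up name):

import Control.Applicative (WrappedMonad (..))

-- WrappedMonad m gets its Functor and Applicative instances from m's Monad
-- instance, so <*> can be recovered by wrapping and unwrapping:
apViaWrap :: Monad m => m (a -> b) -> m a -> m b
apViaWrap mf mx = unwrapMonad (WrapMonad mf <*> WrapMonad mx)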
It isn't currently possible, though it would be if you changed the existing library to support this. Turning DefaultSignatures on would let you write
class Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

    default pure :: Monad f => a -> f a
    default (<*>) :: Monad f => f (a -> b) -> f a -> f b
    pure  = return
    (<*>) = ap
Then once you had implemented instance Monad M where {- ... -}, a simple instance Applicative M (with no where or method definitions) would inherit these default implementations. I'm not sure why this wasn't done.
Is there a way to implement bind for nested monads? What I want is the following signature:
(>>>=) :: (Monad m, Monad n) => m (n a) -> (a -> m (n b)) -> m (n b)
It looks like it should be a trivial task, but I somehow just can't wrap my head around it. In my program, I use this pattern for several different combinations of monads and for each combination, I can implement it. But for the general case, I just don't understand it.
Edit: It seems that it is not possible in the general case. But it is certainly possible in some special cases, e.g. if the inner Monad is Maybe. Since it IS possible for all the Monads I want to use, having additional constraints seems fine to me. So I'll change the question a bit:
What additional constraints do I need on n such that the following is possible?
(>>>=) :: (Monad m, Monad n, ?? n) => m (n a) -> (a -> m (n b)) -> m (n b)
Expanding on the comments: As the linked questions show, it is necessary to have some function n (m a) -> m (n a) to even have a chance to make the composition a monad.
If your inner monad is a Traversable, then sequence provides such a function, and the following will have the right type:
import Control.Monad (join)

(>>>=) :: (Monad m, Monad n, Traversable n) => m (n a) -> (a -> m (n b)) -> m (n b)
m >>>= k = do
    a <- m
    b <- sequence (fmap k a)
    return (join b)
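As a quick sanity check of the definition above, here is how it behaves with m = IO and n = Maybe (example is just an illustrative name):

example :: IO (Maybe Int)
example = pure (Just 2) >>>= \x -> pure (Just (x + 1))
-- yields Just 3; replacing Just 2 with Nothing short-circuits to Nothing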
Several well-known transformers are in fact simple newtype wrappers over something equivalent to this (although mostly defining things with pattern matching instead of literally using the inner monads' Monad and Traversable instances):
MaybeT based on Maybe
ExceptT based on Either
WriterT based on (,) (the Monoid w => Monad ((,) w) instance only exists in newer versions of base, and WriterT uses the opposite tuple order anyway, so it cannot literally reuse it - but in spirit it could).
ListT based on []. Oh, whoops...
The last one is in fact notorious for not being a monad unless the lifted monad is "commutative" - otherwise, expressions that should be equal by the monad laws can give different order of effects. My hunch is that this comes essentially from lists being able to contain more than one value, unlike the other, reliably working examples.
So, although the above definition will be correctly typed, it can still break the monad laws.
Also as an afterthought, one other transformer is such a nested monad, but in a completely different way: ReaderT, based on using (->) as the outer monad.
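Concretely, the outer monad there is (->) r: a ReaderT value is a function producing an inner m-action, and its bind just threads the environment to both sides instead of traversing any inner structure. A minimal sketch (primed names to avoid clashing with transformers):

newtype ReaderT' r m a = ReaderT' { runReaderT' :: r -> m a }

bindReaderT :: Monad m => ReaderT' r m a -> (a -> ReaderT' r m b) -> ReaderT' r m b
bindReaderT (ReaderT' f) k =
    ReaderT' $ \r -> f r >>= \a -> runReaderT' (k a) r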
In my current project I've run into the need to turn various monads into their transformer counterparts e.g.
stateT :: Monad m => State s a -> StateT s m a
stateT stf = StateT $ return . runState stf
It's trivial to write these utility functions for the monads I need, but I was wondering if there already exists a library that contains this functionality for the standard monads and maybe a typeclass that abstracts this sort of transformation. Something like
class (Monad f, MonadTrans t) => LiftTrans f t | f -> t where
    liftT :: Monad m => f a -> t m a
("lift" is probably the wrong term to use here, but I wasn't sure what else to call it.)
Check out function hoist from the mmorph package.
Its signature is
hoist :: Monad m => (forall a. m a -> n a) -> t m b -> t n b
Meaning that it can change the base monad underlying a transformer.
Now, in the transformers package, many "basic" monads are implemented as transformers applied to the Identity monad, like this:
type State s = StateT s Identity
Therefore, we can define the following function (taken from the Generalizing base monads section of the mmorph documentation):
import Data.Functor.Identity
generalize :: (Monad m) => Identity a -> m a
generalize m = return (runIdentity m)
and combine it with hoist:
hoist generalize :: (Monad m, MFunctor t) => t Identity b -> t m b
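As a small usage sketch (bump and bumpIO are made-up names), a pure State computation becomes one that runs over IO, using the generalize defined above:

import Control.Monad.Morph (hoist)
import Control.Monad.Trans.State (State, StateT, put, execStateT)

bump :: State Int ()
bump = put 42

bumpIO :: StateT Int IO ()
bumpIO = hoist generalize bump

-- execStateT bumpIO 0 :: IO Int   -- evaluates to 42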
This method won't work for simple monads which are not defined as transformers applied to Identity, like the Maybe and Either monads. You are stuck with hoistMaybe and hoistEither for these.
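For the record, the Maybe case boils down to something like this (the errors package and newer transformers versions export an equivalent hoistMaybe):

import Control.Monad.Trans.Maybe (MaybeT (..))

hoistMaybe :: Applicative m => Maybe a -> MaybeT m a
hoistMaybe = MaybeT . pure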
You are trying to go the wrong way. If something is a transformer, then the plain version of it is the transformer applied to the Identity monad. (They are not always implemented like that directly, but should be equivalent modulo bottom.) But not all monads are transformers (they need to be universally left or universally right adjoint in order to transform. [citation needed]).
hoist and friends are super helpful, too, but I think it best to invert your process.
When reading up on type classes I have seen that the relationship between Functors, Applicative Functors, and Monads is one of strictly increasing power. Functors are types that can be mapped over. Applicative Functors can do the same with certain effects. Monads can do the same with possibly unrestricted effects. Moreover:
Every Monad is an Applicative Functor
Every Applicative Functor is a Functor
The definition of the Applicative Functor shows this clearly with:
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b
But the definition of Monad is:
class Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b

    (>>)   :: m a -> m b -> m b
    m >> n = m >>= \_ -> n

    fail   :: String -> m a
According to Brent Yorgey's great Typeclassopedia, an alternative definition of Monad could be:
class Applicative m => Monad' m where
    (>>=) :: m a -> (a -> m b) -> m b
which is obviously simpler and would cement that Functor < Applicative Functor < Monad. So why isn't this the definition? I know applicative functors are new, but according to the 2010 Haskell Report page 80, this hasn't changed. Why is this?
Everyone wants to see Applicative become a superclass of Monad, but it would break so much code (if return is eliminated, every current Monad instance becomes invalid) that everyone wants to hold off until we can extend the language in such a way that avoids breaking the code (see here for one prominent proposal).
Haskell 2010 was a conservative, incremental improvement in general, standardising only a few uncontroversial extensions and breaking compatibility in one area to bring the standard in line with every existing implementation. Indeed, Haskell 2010's libraries don't even include Applicative — less of what people have come to expect from the standard library is standardised than you might expect.
Hopefully we'll see the situation improve soon, but thankfully it's usually only a mild inconvenience (having to write liftM instead of fmap in generic code, etc.).
Changing the definition of Monad at this point would break too much existing code (any piece of code that defines a Monad instance) to be worthwhile.
Breaking backwards-compatibility like that is only worthwhile if there is a large practical benefit to the change. In this case the benefit is not that big (and mostly theoretical anyway) and wouldn't justify that amount of breakage.