Does higher-order polymorphism require a strict order of arguments?

Reading LYAH, I stumbled upon this piece of code:
newtype Writer w a = Writer { runWriter :: (a, w) }
instance (Monoid w) => Monad (Writer w) where
  return x = Writer (x, mempty)
  (Writer (x, v)) >>= f = let (Writer (y, v')) = f x in Writer (y, v `mappend` v')
While trying to understand what the heck Writer w is in the first line, I discovered that it is not a full type, but a type constructor with one argument missing, like Maybe in Maybe String.
Looks great, but what if the initial type, Writer', is defined with swapped type arguments, like this:
newtype Writer' a w = Writer' { runWriter :: (a, w) }
Is it possible to implement a Monad instance now? Something like this, but in a form that could actually be compiled:
instance (Monoid w) => Monad (\* -> Writer' * monoid) where
The idea of \* -> Writer' * monoid is the same as Writer w: a type constructor with one type argument missing -- this time the first one.

This is not possible in Haskell; what you'd need is a type-level lambda function, which does not exist.
There are type synonyms which you can use to define reorderings of type variables:
type Writer'' w a = Writer' a w
but you can not give class instances for partially applied type synonyms (even with the TypeSynonymInstances extension).
I wrote my MSc thesis on how type-level lambdas can be added to GHC so that they can be used in type-class instances without sacrificing type inference: https://xnyhps.nl/~thijs/share/paper.pdf
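That said, there is a standard workaround that needs no type-level lambdas: wrap the flipped application in a newtype. (The bifunctors package ships a similar wrapper called Flip; the sketch below is hand-rolled and purely illustrative, reusing the question's Writer'.)

{-# LANGUAGE FlexibleInstances #-}

-- Flip swaps the argument order, so Flip Writer' w is exactly the
-- partially applied constructor the question is asking for.
newtype Flip f b a = Flip { runFlip :: f a b }

instance Functor (Flip Writer' w) where
  fmap f (Flip (Writer' (x, v))) = Flip (Writer' (f x, v))

instance Monoid w => Applicative (Flip Writer' w) where
  pure x = Flip (Writer' (x, mempty))
  Flip (Writer' (f, v)) <*> Flip (Writer' (x, v')) =
    Flip (Writer' (f x, v `mappend` v'))

instance Monoid w => Monad (Flip Writer' w) where
  Flip (Writer' (x, v)) >>= f =
    let Flip (Writer' (y, v')) = f x
    in  Flip (Writer' (y, v `mappend` v'))

The cost is that every use site must wrap and unwrap Flip, which is exactly the bookkeeping a type-level lambda would let you avoid.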

What you are seeing here is a parochial design choice of Haskell. It makes perfect sense, conceptually speaking, to say that your Writer' type is a functor if you "leave out" its first parameter. And a programming language syntax could be invented to allow such declarations.
The Haskell community hasn't done so, because what they have is relatively simple and it works well enough. This isn't to say that alternative designs aren't possible, but to be adopted such a design would have to:
Be no more complex to use in practice than what we already have;
Offer functionality or advantage that would be worth the switch.
This generalizes to many other ways that the Haskell community uses types; often the choice to represent something as a type distinction is tied to some artifact of the language's design. Many monad transformers are good examples, like MaybeT:
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
instance Functor m => Functor (MaybeT m) where ...
instance Applicative m => Applicative (MaybeT m) where ...
instance Monad m => Monad (MaybeT m) where ...
instance MonadTrans MaybeT where ...
Since it's a newtype, this means that MaybeT IO String is isomorphic to IO (Maybe String); you can think of the two types as being two "perspectives" on the same set of values:
IO (Maybe String) is an IO action that produces values of type Maybe String;
MaybeT IO String is a MaybeT IO action that produces values of type String.
The difference between the perspectives is that they imply different implementations of the Monad operations. In Haskell then this is also tied to the following parochial technical facts:
In one, String is the last type parameter (the "values"); in the other, Maybe String is;
IO and MaybeT IO have different instances for the Monad class.
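To make the second point concrete, here is roughly the Monad instance the transformers library gives MaybeT (paraphrased; see the package source for the authoritative version):

instance Monad m => Monad (MaybeT m) where
  return = MaybeT . return . Just
  x >>= f = MaybeT $ do
    v <- runMaybeT x             -- run in the underlying monad m
    case v of
      Nothing -> return Nothing  -- short-circuit on failure
      Just y  -> runMaybeT (f y)

No such short-circuiting happens when you bind in plain IO and treat the Maybe String as an ordinary result value.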
But maybe there is a language design where you could say that the type IO (Maybe a) can have a monad specific to it, distinct from the monad for the more general IO a type. That language would incur some complexity to make that distinction consistently (e.g., rules to determine which Monad instance to use by default for IO (Maybe String), and rules to allow the programmer to override the default choice). And I'd wager modestly that the end result would be no less complex than what we do have. TL;DR: Meh.

Why are monad transformers different to stacking monads?

In many cases, it isn't clear to me what is to be gained by combining two monads with a transformer rather than using two separate monads. Obviously, using two separate monads is a hassle and can involve do notation inside do notation, but are there cases where it just isn't expressive enough?
One case seems to be StateT on List: combining monads doesn't get you the right type, and if you do obtain the right type via a stack of monads like Bar (where Bar a = Reader r (List (Writer w (Identity a)))), it doesn't do the right thing.
But I'd like a more general and technical understanding of exactly what monad transformers are bringing to the table, when they are and aren't necessary, and why.
To make this question a little more focused:
What is an actual example of a monad with no corresponding transformer (this would help illustrate what transformers can do that just stacking monads can't).
Are StateT and ContT the only transformers that give a type not equivalent to the composition of them with m, for an underlying monad m (regardless of the order in which they're composed)?
(I'm not interested in particular implementation details as regards different choices of libraries, but rather the general (and probably Haskell independent) question of what monad transformers/morphisms are adding as an alternative to combining effects by stacking a bunch of monadic type constructors.)
(To give a little context, I'm a linguist who's doing a project to enrich Montague grammar - simply typed lambda calculus for composing word meanings into sentences - with a monad transformer stack. It would be really helpful to understand whether transformers are actually doing anything useful for me.)
Thanks,
Reuben
To answer your question about the difference between Writer w (Maybe a) vs MaybeT (Writer w) a, let's start by taking a look at the definitions:
newtype WriterT w m a = WriterT { runWriterT :: m (a, w) }
type Writer w = WriterT w Identity
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
Using ~~ to mean "structurally similar to" we have:
Writer w (Maybe a) == WriterT w Identity (Maybe a)
                   ~~ Identity (Maybe a, w)
                   ~~ (Maybe a, w)

MaybeT (Writer w) a ~~ (Writer w) (Maybe a)
                    == Writer w (Maybe a)
                       ... same derivation as above ...
                    ~~ (Maybe a, w)
So in a sense you are correct -- structurally both Writer w (Maybe a) and MaybeT (Writer w) a
are the same - both are essentially just a pair of a Maybe value and a w.
The difference is how we treat them as monadic values.
The return and >>= class functions do very different things depending
on which monad they are part of.
Let's consider the pair (Just 3, "" :: String). Using the association
we have derived above here's how that pair would be expressed in both monads:
three_W :: Writer String (Maybe Int)
three_W = return (Just 3)
three_M :: MaybeT (Writer String) Int
three_M = return 3
And here is how we would construct the pair (Nothing, ""):
nutin_W :: Writer String (Maybe Int)
nutin_W = return Nothing
nutin_M :: MaybeT (Writer String) Int
nutin_M = MaybeT (return Nothing) -- could also use mzero
Now consider this function on pairs:
add1 :: (Maybe Int, String) -> (Maybe Int, String)
add1 (Nothing, w) = (Nothing, w)
add1 (Just x, w) = (Just (x+1), w)
and let's see how we would implement it in the two different monads:
add1_W :: Writer String (Maybe Int) -> Writer String (Maybe Int)
add1_W e = do x <- e
              case x of
                Nothing -> return Nothing
                Just y  -> return (Just (y+1))
add1_M :: MaybeT (Writer String) Int -> MaybeT (Writer String) Int
add1_M e = do x <- e; return (x+1)
-- also could use: fmap (+1) e
In general you'll see that the code in the MaybeT monad is more concise.
Moreover, semantically the two monads are very different...
MaybeT (Writer w) a is a Writer-action which can fail, and the failure is
automatically handled for you. Writer w (Maybe a) is just a Writer
action which returns a Maybe. Nothing special happens if that Maybe value
turns out to be Nothing. This is exemplified in the add1_W function where
we had to perform a case analysis on x.
Another reason to prefer the MaybeT approach is that we can write code
which is generic over any monad stack. For instance, the function:
square x = do tell ("computing the square of " ++ show x)
              return (x*x)
can be used unchanged in any monad stack which has a Writer String (its most general type is shown after this list), e.g.:
WriterT String IO
ReaderT Char (WriterT String Maybe)
MaybeT (Writer String)
StateT Int (WriterT String (ReaderT Char IO))
...
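(For reference, the most general type GHC would infer for square, assuming the mtl-style MonadWriter class is in scope, is roughly:)

square :: (MonadWriter String m, Show a, Num a) => a -> m a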
But the return value of square does not type check against Writer String (Maybe Int) because square does not return a Maybe.
When you code in Writer String (Maybe Int), your code explicitly reveals
the structure of the monad, making it less generic. This definition of add1_W:
add1_W e = do x <- e
              return $ do y <- x
                          return $ y + 1
only works in a two-layer monad stack whereas a function like square
works in a much more general setting.
What is an actual example of a monad with no corresponding transformer (this would help illustrate what transformers can do that just stacking monads can't).
IO and ST are the canonical examples here.
Are StateT and ContT the only transformers that give a type not equivalent to the composition of them with m, for an underlying monad m (regardless of the order in which they're composed)?
No, ListT m a is not (isomorphic to) [m a]:
newtype ListT m a = ListT { unListT :: m (Maybe (a, ListT m a)) }
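A sketch of the Monad instance for this "ListT done right" shape shows why (appendListT is my own helper name, not from a library): the effects of the tail are interleaved with the production of each element, which neither [m a] nor m [a] can express.

import Control.Monad (ap, liftM)

appendListT :: Monad m => ListT m a -> ListT m a -> ListT m a
appendListT (ListT mxs) ys = ListT $ do
  mx <- mxs
  case mx of
    Nothing        -> unListT ys
    Just (x, rest) -> return (Just (x, appendListT rest ys))

instance Monad m => Functor (ListT m) where
  fmap = liftM

instance Monad m => Applicative (ListT m) where
  pure a = ListT (return (Just (a, ListT (return Nothing))))
  (<*>)  = ap

instance Monad m => Monad (ListT m) where
  ListT mxs >>= f = ListT $ do
    mx <- mxs
    case mx of
      Nothing        -> return Nothing
      Just (x, rest) -> unListT (f x `appendListT` (rest >>= f))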
To make this question a little more focused:
What is an actual example of a monad with no corresponding transformer (this would help illustrate what transformers can do that just stacking monads can't).
There are no known examples of a monad that lacks a transformer, as long as the monad is defined explicitly as a pure lambda-calculus term, with no side effects and no external libraries being used. The Haskell monads such as IO and ST are essentially interfaces to an external library defined by low-level code. Those monads cannot be defined by pure lambda-calculus, and their monad transformers probably do not exist.
Even though there are no known explicit examples of monads without transformers, there is also no known general method or algorithm for obtaining a monad transformer for a given monad. If I define some complicated monad, for example like this code in Haskell:
type D a = Either a ((a -> Bool) -> Maybe a)
then it is far from obvious how to define a transformer for the monad D.
This D a may look like a contrived and artificial example (and it's also not obvious why it is a monad), but there might be legitimate cases for using it; it is a "free pointed monad on the Search monad on Maybe".
To clarify: A "search monad on n" is the type S n q a = (a -> n q) -> n a where n is another monad and q is a fixed type.
A "free pointed monad on M" is the type P a = Either a (M a) where M is another monad.
In any case, I just want to illustrate the point. I don't think it would be easy for anyone to come up with the monad transformer for D and then to prove that it satisfies the laws of monad transformers. There is no known algorithm that takes the code of D and outputs the code of its transformer.
Are StateT and ContT the only transformers that give a type not equivalent to the composition of them with m, for an underlying monad m (regardless of the order in which they're composed)?
Monad transformers are necessary because stacking two monads is not always a monad. Most "simple" monads, like Reader, Writer, Maybe, etc., stack with other monads in a particular order. But the result of stacking, say, Writer + Reader + Maybe, is a more complicated monad that no longer allows stacking with new monads.
There are several examples of monads that do not stack at all: State, Cont, List, Free monads, the Codensity monad, and a few other, less well known monads, like the "free pointed" monad shown above.
For each of those "non-stacking" monads, one needs to guess the correct monad transformer somehow.
I have studied this question for a while, and I have assembled a list of techniques for creating monad transformers, together with full proofs of all laws. There does not seem to be any systematic way of creating a monad transformer for a given monad. I even found a couple of monads that have two inequivalent transformers.
Generally, monad transformers can be classified into 6 different families:
Functor composition in one or another order: EitherT, WriterT, ReaderT and a generalization of Reader to a special class of monads, called "rigid" monads. An example of a "rigid" monad is Q a = (H a) -> a where H is an arbitrary (but fixed) contravariant functor.
The "adjunction recipe": StateT, ContT, CodensityT, SearchT, which gives transformers that are not functorial.
The "recursive recipe": ListT, FreeT
Cartesian product of monads: If M and N are monads then their Cartesian product, type P a = (M a, N a), is also a monad, whose transformer is the Cartesian product of the transformers (see the sketch after this list).
The free pointed monad: P a = Either a (M a) where M is another monad. For that monad, the transformer's type is m (Either a (MT m a)) where MT is the monad M's transformer.
Monad stacks, that is, monads obtained by applying one or more monad transformers to some other monad. A monad stack's transformer is built via a special recipe that uses all the transformers of the individual monads in the stack.
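For illustration, here is a hedged sketch of family 4, the Cartesian product of monads, which is a monad component-wise (the Product name is mine, not from a library):

import Control.Monad (ap, liftM)

newtype Product m n a = Product { runProduct :: (m a, n a) }

instance (Monad m, Monad n) => Functor (Product m n) where
  fmap = liftM

instance (Monad m, Monad n) => Applicative (Product m n) where
  pure a = Product (return a, return a)
  (<*>)  = ap

instance (Monad m, Monad n) => Monad (Product m n) where
  Product (ma, na) >>= f =
    Product ( ma >>= fst . runProduct . f    -- bind in the first component
            , na >>= snd . runProduct . f )  -- and, independently, in the second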
There may be monads that do not fit into any of these cases, but I have seen no examples so far.
Details and proofs of these constructions of monad transformers are in my draft book here https://github.com/winitzki/sofp

How to use a typeclass like `HasDynFlags m` in GHC

While playing with GHC code base, I find a typeclass named HasDynFlags:
class HasDynFlags m where
  getDynFlags :: m DynFlags
Although the typeclass name looks self-explanatory, I couldn't find any constraint in the typeclass definition that says m has to be a Monad, or at least a Functor, so that we can get access to that value.
However, most uses of it I find in the code base are inside do-notation, e.g. dynFlag <- getDynFlags, where m is further constrained to be an instance of Monad.
My questions are:
For HasDynFlags m, does m have to be at least Functor to make this typeclass useful?
If the answer to the first question is no, then how are we supposed to get access to a value of DynFlags given getDynFlags :: m DynFlags, without any further knowledge about m?
According to the class definition,
class HasDynFlags m where
  getDynFlags :: m DynFlags
m has kind (* -> *). That kind is implied by the type m DynFlags, which shows that m is a type constructor taking exactly one type argument.
There are no further constraints on m here. Specifically, the resulting type needn't be a Functor (or Monad), although given common naming conventions for type variables in Haskell, there's a good chance Monad is the motivating case.
EDIT: To answer the second question, the Functor or Monad class constraints we expect are introduced in more specific contexts. For example, consider the type,
(HasDynFlags m, Monad m) => m DynFlags
I think that's all there is to it.
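To make that concrete, here is a small self-contained sketch (a stub DynFlags and an illustrative reader-like FlagsM, not GHC's real definitions). Declaring the instance needs no Functor at all; it is only consuming the value in do-notation that demands a Monad:

data DynFlags = DynFlags  -- stub

class HasDynFlags m where
  getDynFlags :: m DynFlags

newtype FlagsM a = FlagsM { runFlagsM :: DynFlags -> a }

-- The instance itself requires nothing beyond kind (* -> *):
instance HasDynFlags FlagsM where
  getDynFlags = FlagsM id

-- But binding the value requires more structure:
instance Functor FlagsM where
  fmap f (FlagsM g) = FlagsM (f . g)

instance Applicative FlagsM where
  pure = FlagsM . const
  FlagsM f <*> FlagsM x = FlagsM (\d -> f d (x d))

instance Monad FlagsM where
  FlagsM x >>= f = FlagsM (\d -> runFlagsM (f (x d)) d)

useFlags :: (HasDynFlags m, Monad m) => m DynFlags
useFlags = do
  dflags <- getDynFlags  -- the (<-) is what needs Monad
  return dflags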

Is it better to define Functor in terms of Applicative in terms of Monad, or vice versa?

This is a general question, not tied to any one piece of code.
Say you have a type T a that can be given an instance of Monad. Since every monad is an Applicative by assigning pure = return and (<*>) = ap, and then every applicative is a Functor via fmap f x = pure f <*> x, is it better to define your instance of Monad first, and then trivially give T instances of Applicative and Functor?
It feels a bit backward to me. If I were doing math instead of programming, I would think that I would first show that my object is a functor, and then continue adding restrictions until I have also shown it to be a monad. I know Haskell is merely inspired by Category Theory and obviously the techniques one would use when constructing a proof aren't the techniques one would use when writing a useful program, but I'd like to get an opinion from the Haskell community. Is it better to go from Monad down to Functor? or from Functor up to Monad?
I tend to write and see written the Functor instance first. Doubly so because if you use the LANGUAGE DeriveFunctor pragma then data Foo a = Foo a deriving ( Functor ) works most of the time.
The tricky bits are around agreement of instances when your Applicative can be more general than your Monad. For instance, here's an Err data type
data Err e a = Err [e] | Ok a deriving ( Functor )

instance Applicative (Err e) where
  pure = Ok
  Err es <*> Err es' = Err (es ++ es')
  Err es <*> _       = Err es
  _      <*> Err es  = Err es
  Ok f   <*> Ok x    = Ok (f x)

instance Monad (Err e) where
  return = pure
  Err es >>= _ = Err es
  Ok a   >>= f = f a
Above I defined the instances in Functor-to-Monad order and, taken in isolation, each instance is correct. Unfortunately, the Applicative and Monad instances do not align: ap and (<*>) are observably different as are (>>) and (*>).
Err "hi" <*> Err "bye" == Err "hibye"
Err "hi" `ap` Err "bye" == Err "hi"
For sensibility purposes, especially once the Applicative/Monad Proposal is in everyone's hands, these should align. If you defined instance Applicative (Err e) where { pure = return; (<*>) = ap } then they will align.
But then, finally, you may be capable of carefully teasing apart the differences in Applicative and Monad so that they behave differently in benign ways---such as having a lazier or more efficient Applicative instance. This actually occurs fairly frequently, and I feel the jury is still a little bit out on what "benign" means and under what kinds of "observation" your instances should align. Perhaps the most egregious use of this is in the Haxl project at Facebook, where the Applicative instance is more parallelized than the Monad instance, and thus is far more efficient at the cost of some fairly severe "unobserved" side effects.
In any case, if they differ, document it.
I often choose the reverse approach to the one in J. Abrahamson's answer: I manually define only the Monad instance, and define Applicative and Functor in terms of it with the help of functions already defined in Control.Monad, which renders those instances identical for absolutely any monad, i.e.:
import Control.Monad (ap, liftM)

instance Applicative SomeMonad where
  pure  = return
  (<*>) = ap

instance Functor SomeMonad where
  fmap = liftM
While this way the definitions of Functor and Applicative are always "brain-free" and very easy to reason about, I must note that this is not the ultimate solution, since there are cases when the instances can be implemented more efficiently or even provide new features. E.g., the Applicative instance of Concurrently executes things ... concurrently, while the Monad instance can only execute them sequentially due to the nature of monads.
Functor instances are typically very simple to define, I'd normally do those by hand.
For Applicative and Monad, it depends. pure and return are usually similarly easy, and it really doesn't matter in which class you put the expanded definition. For bind, it is sometimes beneficial to go the "category way", i.e. define a specialised join' :: M (M x) -> M x first and then a >>= b = join' $ fmap b a (which of course wouldn't work if you had defined fmap in terms of >>=). Then it's probably useful to just re-use (>>=) for the Applicative instance.
Other times, the Applicative instance can be written quite easily or is more efficient than the generic Monad-derived implementation. In that case, you should definitely define <*> separately.
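A hedged sketch of that order of definition, using a trivial placeholder monad M (illustrative only, essentially Identity):

newtype M x = M x

instance Functor M where
  fmap f (M x) = M (f x)

-- Define join first, the "category way"...
join' :: M (M x) -> M x
join' (M mx) = mx

instance Applicative M where
  pure = M
  mf <*> mx = mf >>= \f -> fmap f mx  -- re-use (>>=)

instance Monad M where
  a >>= b = join' (fmap b a)          -- ...then bind via fmap and join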
The point here is that Haskell uses the Kleisli-triple presentation of a monad, which is the more convenient formulation if you want to use monads as a tool for imperative-style programming.
I asked the same question myself, and the answer came after a while: if you look at the definitions of Functor, Applicative, and Monad in Haskell, you are missing one link, namely the original categorical definition of a monad, which is built around the join operation; it can be found on the HaskellWiki.
From this point of view you can see how Haskell monads are built up from functors, applicative functors, monads, and Kleisli triples.
A rough explanation can be found here: https://github.com/andorp/pearls/blob/master/Monad.hs
And another with the same ideas here: http://people.inf.elte.hu/pgj/haskell2/jegyzet/08/Monad.hs
I think you misunderstand how sub-classes work in Haskell. They aren't like OO sub-classes! Instead, a sub-class constraint, like
class Applicative m => Monad m
says "any type with a canonical Monad structure must also have a canonical Applicative structure". There are two basic reasons why you would place a constraint like that:
The sub-class structure induces a super-class structure.
The super-class structure is a natural subset of the sub-class structure.
For example, consider:
class Vector v where
  (.^)    :: Double -> v -> v
  (+^)    :: v -> v -> v
  negateV :: v -> v

class Metric a where
  distance :: a -> a -> Double

class (Vector v, Metric v) => Norm v where
  norm :: v -> Double
The first super-class constraint on Norm arises because the concept of a normed space is really weak unless you also assume a vector space structure; the second arises because (given a vector space) a Norm induces a Metric, which you can prove by observing that
instance Metric V where
  distance v0 v1 = norm (v0 +^ negateV v1)
is a valid Metric instance for any V with a valid Vector instance and a valid norm function. We say that the norm induces a metric. See http://en.wikipedia.org/wiki/Normed_vector_space#Topological_structure .
The Functor and Applicative super-classes on Monad are like Metric, not like Vector: the return and >>= functions from Monad induce Functor and Applicative structures (collected as runnable definitions after this list):
fmap: can be defined as fmap f a = a >>= return . f, which was liftM in the Haskell 98 standard library.
pure: is the same operation as return; the two names are a legacy from when Applicative wasn't a super-class of Monad.
<*>: can be defined as af <*> ax = af >>= \ f -> ax >>= \ x -> return (f x), which was liftM2 ($) in the Haskell 98 standard library.
join: can be defined as join aa = aa >>= id.
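(Written out as standalone, runnable definitions, with names suffixed to avoid clashing with the Prelude:)

fmapM :: Monad m => (a -> b) -> m a -> m b
fmapM f a = a >>= return . f                         -- liftM

apM :: Monad m => m (a -> b) -> m a -> m b
apM af ax = af >>= \f -> ax >>= \x -> return (f x)   -- liftM2 ($)

joinM :: Monad m => m (m a) -> m a
joinM aa = aa >>= id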
So it's perfectly sensible, mathematically, to define the Functor and Applicative operations in terms of Monad.

Transformation under Transformers

I'm having a bit of difficulty with monad transformers at the moment. I'm defining a few different non-deterministic relations which make use of transformers. Unfortunately, I'm having trouble understanding how to translate cleanly from one effectful model to another.
Suppose these relations are "foo" and "bar". Suppose that "foo" relates As and Bs to Cs; suppose "bar" relates Bs and Cs to Ds. We will define "bar" in terms of "foo". To make matters more interesting, the computation of these relations will fail in different ways. (Since the bar relation depends on the foo relation, its failure cases are a superset.) I therefore give the following type definitions:
data FooFailure = FooFailure String
data BarFailure = BarSpecificFailure | BarFooFailure FooFailure
type FooM = ListT (EitherT FooFailure (Reader Context))
type BarM = ListT (EitherT BarFailure (Reader Context))
I would then expect to be able to write the relations with the following function signatures:
foo :: A -> B -> FooM C
bar :: B -> C -> BarM D
My problem is that, when writing the definition for "bar", I need to be able to receive errors from the "foo" relation and properly represent them in "bar" space. So I'd be fine with a function of the form
convert :: (e -> e') -> ListT (EitherT e (Reader Context)) a
                     -> ListT (EitherT e' (Reader Context)) a
I can even write that little beast by running the ListT, mapping on EitherT, and then reassembling the ListT (because it happens that m [a] can be converted to ListT m a). But this seems... messy.
There's a good reason I can't just run a transformer, do some stuff under it, and generically "put it back"; the transformer I ran might have effects and I can't magically undo them. But is there some way in which I can lift a function just far enough into a transformer stack to do some work for me so I don't have to write the convert function shown above?
I think convert is a good answer, and using Control.Monad.Morph and Control.Monad.Trans.Either it's (almost) really simple to write:
import Control.Monad.Morph (MFunctor (..))
import Control.Monad.Trans.Either (EitherT, bimapEitherT)

convert :: (Monad m, Functor m, MFunctor t)
        => (e -> e')
        -> t (EitherT e m) b -> t (EitherT e' m) b
convert f = hoist (bimapEitherT f id)
The slight problem is that ListT isn't an instance of MFunctor. I think this is the author boycotting ListT because it doesn't follow the monad transformer laws; in any case, it's easy to write a type-checking instance:
instance MFunctor ListT where hoist nat (ListT mas) = ListT (nat mas)
Anyway, generally take a look at Control.Monad.Morph for dealing with natural transformations on (parts of) transformer stacks. I'd say that fits the definition of lifting a function "just enough" into a stack.
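With that instance in scope, the question's conversion becomes a one-liner (liftFoo is my name for it; BarFooFailure is the constructor from the question):

liftFoo :: FooM a -> BarM a
liftFoo = convert BarFooFailure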

Why aren't monad transformers constrained to yield monads?

In the MonadTrans class:
class MonadTrans t where
  -- | Lift a computation from the argument monad to the constructed monad.
  lift :: Monad m => m a -> t m a
why isn't t m constrained to be a Monad? i.e., why not:
{-# LANGUAGE MultiParamTypeClasses #-}
class Monad (t m) => MonadTrans t m where
  lift :: Monad m => m a -> t m a
If the answer is "because that's just the way it is", that's fine -- it's just confusing for a n00b.
You suggested the following:
class Monad (t m) => MonadTrans t m where
  lift :: Monad m => m a -> t m a
...but does that really mean what you want? It seems you want to express something like "a type t may be an instance of MonadTrans if, for all m :: * -> * where m is an instance of Monad, t m is also an instance of Monad".
What the class definition above actually says is more like "types t and m may constitute an instance of MonadTrans if, for those specific types, t m is an instance of Monad". Consider carefully the difference, and the implied potential for instances that may not be what you'd want.
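To see the difference concretely, here is a sketch of how the multi-parameter version would be instantiated (IdentityT and all instances below are illustrative, not from transformers). The class as written demands an instance per (t, m) pair, and nothing stops someone from providing instances for only some monads m:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

class Monad (t m) => MonadTrans' t m where
  lift' :: Monad m => m a -> t m a

newtype IdentityT m a = IdentityT { runIdentityT :: m a }

instance Monad m => Functor (IdentityT m) where
  fmap f (IdentityT m) = IdentityT (fmap f m)

instance Monad m => Applicative (IdentityT m) where
  pure = IdentityT . pure
  IdentityT mf <*> IdentityT mx = IdentityT (mf <*> mx)

instance Monad m => Monad (IdentityT m) where
  IdentityT m >>= f = IdentityT (m >>= runIdentityT . f)

-- One instance for this specific (t, m) pairing -- "for all m" is
-- only simulated by the instance context, not guaranteed by the class:
instance Monad m => MonadTrans' IdentityT m where
  lift' = IdentityT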
In the general case, every parameter of a type class is an independent "argument", a fact which has been a bountiful source of both headaches and GHC extensions as people have attempted to use MPTCs.
Which isn't to say that such a definition couldn't be used anyway--as you point out, the current definition is not ideal either. The age-old problem "Why Data.Set Is Not a Functor" is related, and such issues helped motivate the recent ConstraintKinds tomfoolery.
The ultimate answer to "why not" here is almost certainly the one given by Daniel Fischer in the comments--because MonadTrans is pretty core functionality, it would be undesirable to make it depend on some terrifying cascade of increasingly arcane GHC extensions.
