Why is there no IO monad transformer in Haskell?

Every other monad comes with a transformer version, and as I understand it, a transformer is a generic extension of a monad. Following how the other transformers are built, IOT would be something like
newtype IOT m a = IOT { runIOT :: m (IO a) }
for which I could make up useful applications on the spot: IOT Maybe can either do an IO action or nothing, IOT [] can build a list that can later be sequenced.
So why is there no IO transformer in Haskell?
(Notes: I've seen this post on Haskell Cafe, but can't make much sense of it. Also, the Hackage page for the ST transformer mentions a possibly related issue in its description, but doesn't offer any details.)

Consider the specific example of IOT Maybe. How would you write a Monad instance for that? You could start with something like this:
instance Monad (IOT Maybe) where
  return x = IOT (Just (return x))
  IOT Nothing  >>= _ = IOT Nothing
  IOT (Just m) >>= k = IOT $ error "what now?"
    where m' = liftM (runIOT . k) m
Now you have m' :: IO (Maybe (IO b)), but you need something of type Maybe (IO b), where--most importantly--the choice between Just and Nothing should be determined by m'. How would that be implemented?
The answer, of course, is that it wouldn't, because it can't. Nor can you justify an unsafePerformIO in there, hidden behind a pure interface, because fundamentally you're asking for a pure value--the choice of Maybe constructor--to depend on the result of something in IO. Nnnnnope, not gonna happen.
The situation is even worse in the general case, because an arbitrary (universally quantified) Monad is even more impossible to unwrap than IO is.
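Note that the obstruction is specific to Monad. As a sketch against the IOT newtype above, the Functor and Applicative instances compose mechanically (this is the general fact that functors and applicative functors compose, which Data.Functor.Compose packages up); it is only >>= that gets stuck:
import Control.Applicative (liftA2)

-- Functors and applicatives compose layer by layer.
instance Functor m => Functor (IOT m) where
  fmap f (IOT m) = IOT (fmap (fmap f) m)

instance Applicative m => Applicative (IOT m) where
  pure = IOT . pure . pure
  IOT mf <*> IOT mx = IOT (liftA2 (<*>) mf mx)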
Incidentally, the ST transformer you mention is implemented differently from your suggested IOT. It uses the internal implementation of ST as a State-like monad over magic pixie dust (special primitives provided by the compiler), and defines a StateT-like transformer based on that. IO is implemented internally as an even more magical ST, and so a hypothetical IOT could be defined in a similar way.
Not that this really changes anything, other than possibly giving you better control over the relative ordering of impure side effects caused by IOT.

Related

Should I prefer MonadUnliftIO or MonadMask for bracket-like functions?

I'm currently building a new API, and one of the functions it currently provides is:
inSpan :: Tracer -> Text -> IO a -> IO a
I'm looking to move that Tracer into a monad, giving me a signature more like
inSpan :: MonadTracer m => Text -> m a -> m a
The implementation of inSpan uses bracket, which means I have two main options:
class MonadUnliftIO m => MonadTracer m
or
class MonadMask m => MonadTracer m
But which should I prefer? Note that I'm in control of all the types I've mentioned, which makes me slightly lean towards MonadMask as it doesn't enforce IO at the bottom (that is, we could perhaps have a pure MonadTracer instance).
Is there anything else I should consider?
Let's lay out the options first (repeating some of your question in the process):
MonadMask from the exceptions library. This can work on a wide range of monads and transformers, and does not require that the base monad be IO.
MonadUnliftIO from the unliftio-core (or unliftio) library. This class only works for monads with IO at their base that are, in some sense, isomorphic to ReaderT env IO.
MonadBaseControl from the monad-control library. This library will require IO at the base, but will allow non-ReaderT.
Now the tradeoffs. MonadUnliftIO is the newest addition to the fray, and has the least developed library support. This means that, in addition to the limitations on which monads can be instances, many good instances just haven't been written yet.
The important question is: why does MonadUnliftIO make this seemingly arbitrary requirement around ReaderT-like things? This is to prevent issues with lost monadic state. For example, the semantics of bracket_ (put 1) (put 2) (put 3) are not really clear, and therefore MonadUnliftIO disallows a StateT instance.
MonadBaseControl relaxes the ReaderT restriction and has wider library support. It's also considered more complicated internally than the other two, but for your usages that shouldn't really matter. And it allows you to make mistakes with the monadic state as mentioned above. If you're careful in your usage, this won't matter.
MonadMask allows totally pure transformer stacks. I think there's a good argument to be had around the usefulness of modeling asynchronous exceptions in a pure stack, but I understand this kind of approach is something people want to do sometimes. In exchange for getting more instances, you still have the limitations around monadic state, plus the inability to lift some IO control actions, like timeout or forkIO.
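To make the comparison concrete, here is a hedged sketch of inSpan under each constraint. Span, beginSpan, and endSpan are hypothetical stand-ins for the real tracing primitives:
import qualified Control.Exception as E
import Control.Monad.Catch (MonadMask)
import qualified Control.Monad.Catch as Catch
import Control.Monad.IO.Unlift (MonadUnliftIO, withRunInIO)
import Data.Text (Text)

data Span = Span

class Monad m => MonadTracer m where
  beginSpan :: Text -> m Span
  endSpan   :: Span -> m ()

-- Option 1: MonadMask. bracket from the exceptions library runs
-- entirely in m, so nothing forces IO at the base of the stack.
inSpanMask :: (MonadMask m, MonadTracer m) => Text -> m a -> m a
inSpanMask name action = Catch.bracket (beginSpan name) endSpan (const action)

-- Option 2: MonadUnliftIO. Unlift m to IO and reuse the plain
-- Control.Exception.bracket.
inSpanUnlift :: (MonadUnliftIO m, MonadTracer m) => Text -> m a -> m a
inSpanUnlift name action = withRunInIO $ \run ->
  E.bracket (run (beginSpan name)) (run . endSpan) (\_ -> run action)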
My recommendation:
If you want to match the way most people are doing things today, it's probably best to choose MonadMask; it's the most widely adopted solution.
If you want that goal, but you also need to do a timeout or withMVar or something, use MonadBaseControl.
And if you know there's a specific set of monads you need compatibility with, and want compile time guarantees about the correctness of your code vis-a-vis monadic state, use MonadUnliftIO.

lift, return, and a transformer type constructor

For well over a year, I have been intensely using lift, return, and constructors such as EitherT, ReaderT, and so forth. I've read Real World Haskell, Learn You a Haskell, almost every monad tutorial out there, and tried writing my own. Yet, I constantly remain confused about these three operations. Any time I am writing new code I try to figure out which of the three to use, and it almost always takes me an hour or more on the first function in a particular block of code.
What is an intuitive understanding of the three? Simple types are insufficient, as in all three cases I can instantly recite the types to you. What is a meaning for what these do that is consistent across all of the standard monad transformers?
(Unfortunately, if you respond in math terms, I'm still not going to understand you. While I can write code to solve math problems and can set up time complexity based on the code I see, I cannot after many years of trying to work in Haskell relate math terms to programming terms.)
return takes a pure computation and turns it into a computation which claims to have some monad-y side-effects, but doesn't.
lift takes a computation that has some side-effects, and adds more.
EitherT, ReaderT, and so on take a computation that already has all the side-effects you're interested in and "spells them differently" -- for example, where before your state was spelled as a function that returns an updated value, it is now spelled as a State(T)-ful computation.
So let's say you have a computation. In a lazy language like Haskell you'd write
comp1 :: a
and know that this computation will be performed upon request and result in a value of type a.
Let's say you have a similar computation, but in addition to computing a value of type a, it might "fail" for some reason or another. For example, a might be Integer and this computation will "fail" if it involves a division by zero. We'd now write this as
comp2 :: Maybe a
where the Maybe constructor "tags" the a to indicate failure.
Let's say we have a similar computation as before, where we are still allowed to fail, but we also collect a log during the computation. "Log collecting" is called Writer so we'd like to tag our type with Writer as well as Maybe. Unfortunately
comp3_bad :: (Writer String) Maybe a
doesn't make any sense. Writer String takes a single type parameter, not two. We can still think through the underlying mechanics of this combined effect: it needs to return a Maybe paired with the log... or perhaps, if the computation fails, the log is discarded. There are two options
comp3_1 :: (String, Maybe a)
comp3_2 :: Maybe (String, a)
Re-introducing the Writer wrapper, we can see that these are equivalent to
comp3_1' :: Writer String (Maybe a)
comp3_2' :: Maybe (Writer String a)
This pattern of nesting is called composition. If you want to combine the effects of two monads then you'd like to compose them. For some monads this works directly, though it's a little cumbersome.
Unfortunately, some monads start to break the monad laws once they are composed. They can still be "stacked" but not in the normal way. So, we allow each type to determine its stacking method by creating the transformer version <monad>T.
newtype WriterT w m a = WriterT { runWriterT :: m (w, a) }
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
-- note that
WriterT String Maybe a == Maybe (String, a)
MaybeT (Writer String) a == (String, Maybe a)
These composed stacks of monads are called monad transformer stacks and they allow you to assemble side effects in layers.
So what happens if we have two different, but similar, stacks that we'd like to use together? For instance, we can consider Maybe to be a monad... or a monad transformer stack of a single layer. Compare that to WriterT String Maybe which is a monad transformer stack of two layers, the bottom of which is Maybe.
These two stacks are very similar, but we cannot transport computations from one to the other. Or rather, we can, but it's fairly annoying
transport :: Maybe a -> WriterT String Maybe a
transport Nothing = WriterT Nothing
transport (Just a) = WriterT (Just ("", a))
This transport is an instance of a general pattern where we "add another layer" onto a stack. The general pattern is called lift
lift :: Maybe a -> WriterT String Maybe a
Or, written polymorphically we see the extra layer t being prepended.
lift :: (MonadTrans t, Monad m) => m a -> t m a
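As a minimal sketch, here is what lift looks like for the WriterT defined above (note this answer's (w, a) field order; the transformers library uses (a, w)): the underlying computation is simply tagged with an empty log.
import Control.Monad.Trans.Class (MonadTrans (..))

-- "Adding a layer": the lifted computation contributes an empty log.
instance Monoid w => MonadTrans (WriterT w) where
  lift m = WriterT (fmap (\a -> (mempty, a)) m)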
Finally, we've come a long way from our pure computation at the beginning
comp1 :: a
and demonstrated that we can lift simple transformer stacks into more complex ones. Can we consider comp1 to be living in the very simplest of transformer stacks—the empty stack?
It turns out that this is actually a really valid point of view. We can even "lift" comp1 into a more sophisticated transformer stack... but the terminology changes slightly.
return :: Monad m => a -> m a
So, it's valid to think of return as lifting a pure computation into a basic monad. This is a foundational principle of monads even—that they can embed pure computations within them.
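To see this concretely, the same pure computation can be embedded in several monads by return alone:
comp1 :: Int
comp1 = 42

inMaybe :: Maybe Int
inMaybe = return comp1   -- Just 42

inList :: [Int]
inList = return comp1    -- [42]

inIO :: IO Int
inIO = return comp1      -- an IO action with no side-effects that yields 42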

Is 'Chaining operations' the "only" thing that the Monad class solves?

To clarify the question: it is about the merits of the monad type class (as opposed to just its instances without the unifying class).
After having read many references (see below),
I came to the conclusion that, actually, the monad class is there to solve only one, but big and crucial, problem: the 'chaining' of functions on types with context. Hence, the famous sentence "monads are programmable semicolons".
In fact, a monad can be viewed as an array of functions with helper operations.
I insist on the difference between the monad class, understood as a general interface for other types; and these other types instantiating the class (thus, "monadic types").
I understand that the Monad class by itself only solves the chaining of operations, because it mainly mandates that its instances
have bind (>>=) and return, and tells us how they must behave. And as a bonus, the compiler greatly helps the coding by providing do notation for monadic types.
On the other hand,
it is each individual type instantiating the Monad class which solves each concrete problem, not merely its being an instance of Monad. For instance, Maybe solves "how a function returns a value or an error", State solves "how to have functions with global state", IO solves "how to interact with the outside world", and so on. All these types encapsulate a value within a context.
But sooner or later, we will need to chain operations on such context-types. That is, we will need to organize calls to functions on these types in a particular sequence (for an example of such a problem, please read the example about multivalued functions in You could have invented monads).
And the problem of chaining is solved by making each type an instance of the Monad class.
For the chaining to work you need >>= just with the exact signature it has, no other. (See this question).
Therefore, I guess that the next time you define a context data type T for solving something, if you need to sequence calls of functions (on values of T), consider making T an instance of Monad (if you need "chaining with choice" and if you can benefit from the do notation). And to make sure you are doing it right, check that T satisfies the monad laws.
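For reference, the laws to check, written as Haskell equations:
return a >>= k   =  k a                          -- left identity
m >>= return     =  m                            -- right identity
(m >>= k) >>= h  =  m >>= (\x -> k x >>= h)      -- associativity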
Then, I ask two questions to the Haskell experts:
A concrete question: is there any other problem that the Monad class solves by itself (leaving aside the monadic types)? If so, how does it compare in relevance to the problem of chaining operations?
An optional general question: are my conclusions right, am I misunderstanding something?
References
Tutorials
Monads in pictures (definitely worth it; read this one first)
Fistful of monads
You could have invented monads
Monads are trees (pdf)
StackOverflow Questions & Answers
How to detect a monad
On the signature of >>= monad operator
You're definitely on to something in the way that you're stating this—there are many things that Monad means and you've separated them out well.
That said, I would definitely say that chaining operations is not the primary thing solved by Monads. That can be solved using plain Functors (with some trouble) or easily with Applicatives. You need to use the full monad spec when "chaining with choice". In particular, the tension between Applicative and Monad comes from Applicative needing to know the entire structure of the side-effecting computation statically. Monad can change that structure at runtime and thus sacrifices some analyzability for power.
To make the point more clear, the only place you deal with a Monad but not any specific monad is if you're defining something with polymorphism constrained to be a Monad. This shows up repeatedly in the Control.Monad module, so we can examine some examples from there.
sequence :: Monad m => [m a] -> m [a]
forever  :: Monad m => m a -> m b
foldM    :: Monad m => (a -> b -> m a) -> a -> [b] -> m a
Immediately, we can throw out sequence as being particular to Monad since there's a corresponding function in Data.Traversable, sequenceA which has a type slightly more general than Applicative f => [f a] -> f [a]. This ought to be a clear indicator that Monad isn't the only way to sequence things.
Similarly, we can define foreverA as follows
foreverA :: Applicative f => f a -> f b
foreverA f = flip const <$> f <*> foreverA f
So more ways to sequence non-Monad types. But we run into trouble with foldM
foldM :: (Monad m) => (a -> b -> m a) -> a -> [b] -> m a
foldM _ a [] = return a
foldM f a (x:xs) = f a x >>= \fax -> foldM f fax xs
If we try to translate this definition to Applicative style we might write
foldA :: (Applicative f) => (a -> b -> f a) -> a -> [b] -> f a
foldA _ a [] = pure a
foldA f a (x:xs) = foldA f <$> f a x <*> pure xs
But Haskell will rightfully complain that this doesn't typecheck--each recursive call to foldA tries to put another "layer" of f on the result. With Monad we could join those layers down, but Applicative is too weak.
So how does this translate to Applicatives restricting us from runtime choices? Well, that's exactly what we express with foldM, a monadic computation (a -> b -> m a) which depends upon its a argument, a result from a prior monadic computation. That kind of thing simply doesn't have any meaning in the more purely sequential world of Applicative.
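A minimal illustration of that runtime choice, in IO: which action runs second depends on the result of the first, something no fixed Applicative structure can express.
choose :: IO ()
choose = do
  line <- getLine
  -- The shape of the rest of the computation depends on a runtime value.
  if null line
    then putStrLn "no input"
    else putStrLn ("you said: " ++ line)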
To solve the problem of chaining operations on an individual monadic type, it's not at all necessary to make it an instance of Monad and be sure the monad laws are satisfied. You could just implement a chaining operation directly on your type.
It would probably be very similar to the monadic bind, but not necessarily exactly the same (recall that bind for lists is concatMap, a function that exists anyway, but with the arguments in a different order). And you wouldn't have to worry about the monad laws, because you would have a slightly different interface for each type, so they wouldn't have any common requirements.
To ask what problem the Monad type class itself solves, look at all the functions (in Control.Monad and elsewhere) that work on values in any monadic type. The problem solved is code reuse! Monad is exactly the part of all the monadic types that is common to each and every one of them. That part is sufficient on its own to write useful computations. All of these functions could be implemented for any individual monadic type (often more directly), but they've already been implemented for all monadic types, even the ones that don't exist yet.
You don't write a Monad instance so that you can chain operations on your type (often you already have a way of chaining, in fact). You write a Monad instance for all the code that automatically comes along with the Monad instance. Monad isn't about solving any problem for any single type, it's about a way of viewing many disparate types as instances of a single unifying concept.
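A small illustration of that reuse: a function written once against the Monad interface gets a sensible meaning at every instance, including instances written long after the function.
-- Defined once for any Monad.
pairs :: Monad m => m a -> m (a, a)
pairs m = do
  x <- m
  y <- m
  return (x, y)

-- pairs [1, 2]   == [(1,1),(1,2),(2,1),(2,2)]   (all combinations)
-- pairs (Just 3) == Just (3, 3)                 (Nothing if either side fails)
-- pairs getLine  -- reads two lines in IO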

Partially lift with liftIO

I'm trying to do something that's probably impossible. I have a type that is an instance of MonadIO. If you liftIO an IO action in a context where this type is the base monad of some transformer stack, it will work fine. So, what I'd like to be able to do is take a value that's already been lifted part-way (to my type) and lift it "the rest of the way" in one step.
I can do this in two ways. One is that my type can actually be trivially re-embedded into normal IO, so I can do this:
liftMore :: (MonadIO m) => MyType a -> m a
liftMore x = liftIO $ embedMyTypeInIO x
And this works. However, it also provides a way to fully escape from my type if used in a context where plain IO is the base monad, which is undesirable.
I can also do this by building a new typeclass like MonadIO that uses my type as a base, but then it needs to be instantiated for everything, which is very undesirable. I tried using a newtype wrapper to make every monad transformer an instance of such a class, but couldn't quite get it.
Any ideas on strategies I could try to accomplish this? (I'm willing to play with language extensions, but of course a solution that is Haskell98 is much preferable.)
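For reference, a hedged sketch of the typeclass approach mentioned above, assuming the question's MyType. The class name and method are hypothetical, and the catch-all instance (which needs FlexibleInstances and UndecidableInstances) is exactly the part that is hard to get right:
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}
import Control.Monad.Trans.Class (MonadTrans, lift)

-- Hypothetical class mirroring MonadIO, with MyType at the base.
class Monad m => MonadMyType m where
  liftMyType :: MyType a -> m a

instance MonadMyType MyType where
  liftMyType = id

-- Catch-all: any transformer over a MonadMyType is a MonadMyType.
instance (MonadTrans t, MonadMyType m, Monad (t m)) => MonadMyType (t m) where
  liftMyType = lift . liftMyType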

Working on permuted monad transformer stack

One of the problems with monad transformers I find is the need to lift the operations into the right monad. A single lift here and there isn't bad, but sometimes there are functions that look like this:
fun = do
  lift a
  lift b
  c
  lift d
  lift e
  f
I would like to be able to write this function thus:
fun = monadInvert $ do
  a
  b
  lift c
  d
  e
  lift f
This halves the number of lifts and makes the code cleaner.
The question is: for what monads is monadInvert possible? How should one create this function?
Bonus points: define it for a monad m which is an instance of MonadIO.
The title of this question speaks of permutations: indeed, how can we deal with arbitrary permutations of a monad transformer stack?
Well, first of all, you don't actually need so much lifting. For monad transformers the following identities hold:
lift c >>= lift . f = lift (c >>= f)
lift c1 >> lift c2 = lift (c1 >> c2)
It's not uncommon to write:
x <- lift $ do
  {- ... -}
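Applying these identities, the fun from the question can be grouped so that each run of inner-monad actions costs a single lift:
fun = do
  lift $ do
    a
    b
  c
  lift $ do
    d
    e
  f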
Next: when you use libraries like mtl or monadLib (i.e. type-class-based libraries instead of transformers directly), you can actually access most underlying monads directly:
c :: StateT MyState (ReaderT MyConfig SomeOtherMonad) Result
c = do
  x <- ask
  y <- get
  {- ... -}
Finally, if you really need a lot of lifting despite these two points, you should consider writing a custom monad or even use an entirely different abstraction. I find myself using the automaton arrow for stateful computations instead of a state monad.
You might be interested in Monads, Zippers and Views: Virtualizing the Monad Stack by Tom Schrijvers and Bruno Oliveira.
This doesn't address your point about reducing the lifts, but it's an interesting approach to your "monad permutations" problem.
Here's the abstract:
This work aims at making monadic components more reusable and robust
to changes by employing two new techniques for virtualizing the monad
stack: the monad zipper and monad views. The monad zipper is a monad
transformer that creates virtual monad stacks by ignoring particular
layers in a concrete stack. Monad views provide a general framework
for monad stack virtualization: they take the monad zipper one step
further and integrate it with a wide range of other virtualizations.
For instance, particular views allow restricted access to monads in
the stack. Furthermore, monad views can be used by components to
provide a call-by-reference-like mechanism to access particular layers
of the monad stack. With these two mechanisms component requirements
in terms of the monad stack shape no longer need to be literally
reflected in the concrete monad stack, making these components more
reusable and robust to changes.
I am mostly sure that what you are describing is impossible for IO, which is always the innermost monad:
From Martin Grabmüller: Monad Transformers Step by Step, available at
http://www.grabmueller.de/martin/www/pub/
Down in this document, we call liftIO in eval6 to perform I/O actions.
Why do we need to lift in this case? Because there is no IO class for
which we can instantiate a type as. Therefore, for I/O actions, we
have to call lift to send the commands inwards
In general, for monads less restrictive than IO (such as Error and State), order still matters for semantics, so you can't change the order of the stack just to make the syntax more convenient.
For some monads, like Reader, that are central (in that they commute in the stack), your idea does not seem clearly impossible. I don't know how to write it, though. I guess it would be a type class CentralMonad that ReaderT is an instance of, with some implementation...
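To make the "central" intuition concrete, here is a sketch of the isomorphism witnessing that ReaderT commutes with StateT (both orderings unwrap to r -> s -> m (a, s)):
import Control.Monad.Trans.Reader (ReaderT (..))
import Control.Monad.Trans.State (StateT (..))

-- Swap a Reader layer past a State layer, and back.
swapRS :: ReaderT r (StateT s m) a -> StateT s (ReaderT r m) a
swapRS m = StateT $ \s -> ReaderT $ \r -> runStateT (runReaderT m r) s

swapSR :: StateT s (ReaderT r m) a -> ReaderT r (StateT s m) a
swapSR m = ReaderT $ \r -> StateT $ \s -> runReaderT (runStateT m s) r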
