Is 'Chaining operations' the "only" thing that the Monad class solves? - haskell

To clarify the question: it is about the merits of the monad type class (as opposed to just its instances without the unifying class).
After having read many references (see below),
I came to the conclusion that, actually, the monad class is there to solve only one, but big and crucial, problem: the 'chaining' of functions on types with context. Hence, the famous sentence "monads are programmable semicolons".
In fact, a monad can be viewed as an array of functions with helper operations.
I insist on the difference between the monad class, understood as a general interface for other types; and these other types instantiating the class (thus, "monadic types").
I understand that the monad class by itself only solves the chaining of operations, mainly because all it mandates is that its instances
have bind (>>=) and return, and it tells us how they must behave. And as a bonus, the compiler greatly helps the coding by providing do notation for monadic types.
On the other hand,
it is each individual type instantiating the monad class which solves each concrete problem, and not merely by being an instance of Monad. For instance, Maybe solves "how a function returns a value or an error", State solves "how to have functions with global state", IO solves "how to interact with the outside world", and so on. All these types encapsulate a value within a context.
But sooner or later, we will need to chain operations on such context types; i.e., we will need to organize calls to functions on these types in a particular sequence (for an example of such a problem, read the example about multivalued functions in You could have invented monads).
And the problem of chaining is solved once each such type is made an instance of the monad class.
For the chaining to work, you need >>= with exactly the signature it has, and no other. (See this question.)
Therefore, I guess that the next time you define a context data type T for solving something, and you need to sequence calls of functions on values of T, consider making T an instance of Monad (if you need "chaining with choice", and if you can benefit from the do notation). And to make sure you are doing it right, check that T satisfies the monad laws.
Then, I ask two questions to the Haskell experts:
A concrete question: is there any other problem that the monad class solves by itself (leaving aside the monadic types)? If so, how does it compare in relevance to the problem of chaining operations?
An optional general question: are my conclusions right, am I misunderstanding something?
References
Tutorials
Monads in pictures Definitely worth it; read this one first.
Fistful of monads
You could have invented monads
Monads are trees (pdf)
StackOverflow Questions & Answers
How to detect a monad
On the signature of >>= monad operator

You're definitely on to something in the way that you're stating this—there are many things that Monad means and you've separated them out well.
That said, I would definitely say that chaining operations is not the primary thing solved by Monads. That can be solved using plain Functors (with some trouble) or easily with Applicatives. You need to use the full monad spec when "chaining with choice". In particular, the tension between Applicative and Monad comes from Applicative needing to know the entire structure of the side-effecting computation statically. Monad can change that structure at runtime and thus sacrifices some analyzability for power.
To make the point more clear, the only place you deal with a Monad but not any specific monad is if you're defining something with polymorphism constrained to be a Monad. This shows up repeatedly in the Control.Monad module, so we can examine some examples from there.
sequence :: Monad m => [m a] -> m [a]
forever  :: Monad m => m a -> m b
foldM    :: Monad m => (a -> b -> m a) -> a -> [b] -> m a
Immediately, we can throw out sequence as being particular to Monad since there's a corresponding function in Data.Traversable, sequenceA which has a type slightly more general than Applicative f => [f a] -> f [a]. This ought to be a clear indicator that Monad isn't the only way to sequence things.
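For instance, here is a tiny illustration (assuming only the standard base instances): ZipList is an Applicative with no Monad instance, yet sequenceA sequences it without any trouble.
import Control.Applicative (ZipList(..))
import Data.Traversable (sequenceA)

-- ZipList is Applicative but not a Monad in base, yet sequenceA works fine:
zipped :: ZipList [Int]
zipped = sequenceA [ZipList [1,2,3], ZipList [10,20,30]]
-- getZipList zipped == [[1,10],[2,20],[3,30]]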
Similarly, we can define foreverA as follows
foreverA :: Applicative f => f a -> f b
foreverA f = flip const <$> f <*> foreverA f
So more ways to sequence non-Monad types. But we run into trouble with foldM
foldM :: (Monad m) => (a -> b -> m a) -> a -> [b] -> m a
foldM _ a [] = return a
foldM f a (x:xs) = f a x >>= \fax -> foldM f fax xs
If we try to translate this definition to Applicative style we might write
foldA :: (Applicative f) => (a -> b -> f a) -> a -> [b] -> f a
foldA _ a [] = pure a
foldA f a (x:xs) = foldA f <$> f a x <*> pure xs
But Haskell will rightfully complain that this doesn't typecheck--each recursive call to foldA tries to put another "layer" of f on the result. With Monad we could join those layers down, but Applicative is too weak.
So how does this translate to Applicatives restricting us from runtime choices? Well, that's exactly what we express with foldM, a monadic computation (a -> b -> m a) which depends upon its a argument, a result from a prior monadic computation. That kind of thing simply doesn't have any meaning in the more purely sequential world of Applicative.
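To make the "join those layers down" remark concrete, here is a minimal sketch (not the library definition of foldM, just an equivalent one) showing how join collapses the extra layer that the Applicative attempt above gets stuck with:
import Control.Monad (join)

foldM' :: Monad m => (a -> b -> m a) -> a -> [b] -> m a
foldM' _ a []     = return a
foldM' f a (x:xs) = join (foldM' f <$> f a x <*> pure xs)
-- foldM' f <$> f a x <*> pure xs :: m (m a); join flattens it back to m a.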

To solve the problem of chaining operations on an individual monadic type, it's not at all necessary to make it an instance of Monad and be sure the monad laws are satisfied. You could just implement a chaining operation directly on your type.
It would probably be very similar to the monadic bind, but not necessarily exactly the same (recall that bind for lists is concatMap, a function that exists anyway, but with the arguments in a different order). And you wouldn't have to worry about the monad laws, because you would have a slightly different interface for each type, so they wouldn't have any common requirements.
To ask what problem the Monad type class itself solves, look at all the functions (in Control.Monad and elsewhere) that work on values in any monadic type. The problem solved is code reuse! Monad is exactly the part of all the monadic types that is common to each and every one of them. That part is sufficient on its own to write useful computations. All of these functions could be implemented for any individual monadic type (often more directly), but they've already been implemented for all monadic types, even the ones that don't exist yet.
You don't write a Monad instance so that you can chain operations on your type (often you already have a way of chaining, in fact). You write a Monad instance for all the code that automatically comes along with the Monad instance. Monad isn't about solving any problem for any single type, it's about a way of viewing many disparate types as instances of a single unifying concept.
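A small illustration of that reuse (standard Control.Monad functions, nothing type-specific): the same generic combinators work unchanged in the list, Maybe, and IO monads.
import Control.Monad (replicateM, foldM, when)

allPairs :: [[Int]]
allPairs = replicateM 2 [0, 1]          -- list monad: [[0,0],[0,1],[1,0],[1,1]]

safeSum :: Maybe Int
safeSum = foldM (\acc x -> if x < 0 then Nothing else Just (acc + x)) 0 [1, 2, 3]
                                        -- Maybe monad: Just 6

chatty :: IO ()
chatty = when True (putStrLn "hello")   -- IO monad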

Related

Composing non-distributive monads in recursion-schemes

One of my favourite things about the recursion schemes in Haskell are the generalised morphisms (gcata etc.) that allow interleaving (co-)monadic computations with recursion, using a monad transformer library. For example, as described in this great blog post.
However, I've run into a problem; to be able to use these functions we need the (co-)monads to be (co-)sequentiable. Consider a type signature for gana:
gana :: Monad m => (forall z . m (f z) -> f (m z)) -> (a -> f (m a)) -> a -> b
The first argument essentially says that m must have a sequence operator.
Unfortunately, I've found that in practice, there are monads which are not distributive. For example:
A monad representing a database transaction. When aborted, the transaction can be rolled back; if sequenced, it can only be rolled back to the point where it was sequenced.
A concurrency monad, representing a lock on a resource, or an atomic computation. When sequenced, the lock is lost momentarily.
In this case, it is still possible to write a specialised recursion scheme that interleaves the monadic execution; but you lose the ability to fuse it using a monad transformer. i.e. if you want to combine such non-distributive monads, they can not be fused using a transformer with the monad/comonad in the f-(co)algebra. Concretely, I couldn't use a monad transformer to combine a DBTransaction monad with an apomorphism (ExceptT/EitherT); I'd need to write a custom recursion-scheme from scratch.
My question is whether anyone has suggestions for working around this limitation.

Why does Haskell contain so many equivalent functions

It seems like there are a lot of functions that do the same thing, particularly relating to Monads, Functors, and Applicatives.
Examples (from most to least generic):
fmap == liftA == liftM
(<*>) == ap
liftA[2345] == liftM[2345]
pure == return
(*>) == (>>)
An example not directly based on the FAM class tree:
fmap == map
(I thought there were quite a few more with List, Foldable, Traversable, but it looks like most were made more generic some time ago, as I only see the old, less generic type signatures in old stack overflow / message board questions)
I personally find this annoying, because it means that if I need to do x, and some function such as liftM allows me to do x, then I may have made my function less generic than it could have been. I will only notice that kind of thing by thoroughly reasoning about the differences between the types involved (such as FAM, or perhaps the List, Foldable, and Traversable combinations as well), which is not beginner friendly at all: simply using those types isn't all that hard, but reasoning about their properties and laws requires a lot more mental effort.
I am guessing a lot of these equivalencies come from the Applicative Monad Proposal. If that is the reason for them (and not some other reason I am missing for having less generic functions available for confusion), are they going to be deprecated / deleted ever? I can understand waiting a long time to delete them, due to breaking existing code, but surely deprecation is a good idea?
The short answers are "history" and "regularity".
Originally "map" was defined for lists. Then type-classes were introduced, with the Functor type class, so the generalised version of "map" for any functor had to be called something different, otherwise existing code would be broken. Hence "fmap".
Then monads came along. Instances of monads did not need to be functors, so "liftM" was created, along with "liftM2", "liftM3" etc. Of course if a type is an instance of both Monad and Functor then fmap = liftM.
Monads also have "ap", used in expressions like f `ap` arg1 `ap` arg2. This was very handy, but then Applicative Functors were added. (<*>) did the same job for applicative functors as 'ap', but because many applicative functors are not monads it had to be called something different. Likewise liftAx versus liftMx and "pure" versus "return".
They aren't equivalent, though. Equivalent things in Haskell can be interchanged with no difference at all in functionality. Consider, for example, pure and return.
EDIT: I wrote some examples down, but they were really bad since they involved Maybe a, a type that is both an applicative and a monad, so the functions could be used pretty interchangeably.
There are types that are applicatives but not monads though (see this question for examples), and by studying the type of the following expression, we can see that this could lead to some roadbumps:
pure 1 >>= pure :: (Monad m, Num b) => m b
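For instance, a minimal sketch of the roadbump (standard base instances assumed): ZipList is Applicative but has no Monad instance, so the >>= version is rejected while the Applicative-only version compiles.
import Control.Applicative (ZipList(..))

fine :: ZipList Int
fine = pure 1                  -- only needs Applicative

-- rejected :: ZipList Int
-- rejected = pure 1 >>= pure  -- error: no instance for (Monad ZipList)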
I personally find this annoying, as it means that if I need to do x, and some function such as liftM allows me to do x, then I will have made my function less generic than it could have been
This logic is backwards.
Normally you know in advance the type of the thing you want to write, be it IO String or (Foldable f, Monoid t, Monad m) => f (m t) -> m t or whatever. Let's take the first case, getLineCapitalized :: IO String. You could write it as
getLineCapitalized = liftM (map toUpper) getLine
or
getLineCapitalized = fmap (fmap toUpper) getLine
Is the former "less generic" because it uses the specialized functions liftM and map? Of course not. This is intrinsically an IO action that produces a list. It cannot become "more generic" by changing it to the second version since those fmaps will have their types fixed to IO and [] anyways. So, there is no advantage to the second version.
By writing the first version, you provide contextual information to the reader for free. In liftM (map foo) bar, the reader knows that bar is going to be an action in some monad that returns a list. In fmap (fmap foo) bar, it could be any sort of doubly-nested structure whatsoever. If bar is something complicated rather than just getLine, then this kind of information is helpful for understanding more easily what is going on in bar.
In general, you should write a function in two steps.
Decide what the type of the function should be. Make it as general or as specific as you want. The more general the type of the function, the stronger guarantees you get on its behavior from parametricity.
Once you have decided on the type of your function, implement it using the most specific available functions. By doing so, you are providing the most information to the reader of your function. You never lose any generality or parametricity guarantees by doing so, since those only depend on the type, which you already determined in step 1.
Edit in response to comments: I was reminded of the biggest reason to use the most specific function available, which is catching bugs. The type length :: [a] -> Int is essentially the entire reason that I still use GHC 7.8. It's never happened that I wanted to take the length of an unknown Foldable structure. On the other hand, I definitely do not want to ever accidentally take the length of a pair, or take the length of foo bar baz which I think has type [a], but actually has type Maybe [a].
In the use cases for Foldable that are not already covered by the rest of the Haskell standard, lens is a vastly more powerful alternative. If I want the "length" of a Maybe t, lengthOf _Just :: Maybe t -> Int expresses my intent clearly, and the compiler can check that the program actually matches my intent; and I can go on to write lengthOf _Nothing, lengthOf _Left, etc. Explicit is better than implicit.
There are some "redundant" functions like liftM, ap, and liftA that have a very real use and taking them out would cause loss of functionality --- you can use liftM, ap, and liftA to implement your Functor or Applicative instances if all you've written is a Monad instance. It lets you be lazy and do, say:
instance Monad Foo where
    return = ...
    (>>=)  = ...
Now you've done all of the rewarding work of defining a Monad instance, but this won't compile. Why? Because you also need a Functor and Applicative instance.
So, because you're quickly prototyping, or lazy, or can't think of a better way, you can just get a free Functor and Applicative instance:
instance Functor Foo where
    fmap = liftM

instance Applicative Foo where
    pure  = return
    (<*>) = ap
In fact, you can just copy-and-paste that chunk of code everywhere you need to quickly define a Functor or Applicative instance when you already have a Monad instance defined.
The same goes for fmapDefault from Data.Traversable. If you've implemented Traversable, you can also implement Foldable and Functor:
instance Functor Bar where
    fmap = fmapDefault
no extra work required!
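The Foldable half of the same trick works too; a sketch, assuming Bar already has a Traversable instance (foldMapDefault also lives in Data.Traversable):
import Data.Traversable (foldMapDefault)

instance Foldable Bar where
    foldMap = foldMapDefault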
There are some redundant functions, however, that really have no actual usage other than being historical accidents from a time when Functor was not a superclass of Monad. These have literally zero use/point in existing...and include things like the liftM2, liftM3 etc., and (>>) and friends.

lift, return, and a transformer type constructor

For well over a year, I have been intensely using lift, return, and constructors such as EitherT, ReaderT, and so forth. I've read Real World Haskell, Learn You a Haskell, almost every monad tutorial out there, and tried writing my own. Yet, I constantly remain confused about these three operations. Any time I am writing new code I try to figure out which of the three to use, and it almost always takes me an hour or more on the first function in a particular block of code.
What is an intuitive understanding of the three? Simple types are insufficient, as in all three cases I can instantly recite the types to you. What is a meaning for what these do that is consistent across all of the standard monad transformers?
(Unfortunately, if you respond in math terms, I'm still not going to understand you. While I can write code to solve math problems and can set up time complexity based on the code I see, I cannot after many years of trying to work in Haskell relate math terms to programming terms.)
return takes a pure computation and turns it into a computation which claims to have some monad-y side-effects, but doesn't.
lift takes a computation that has some side-effects, and adds more.
EitherT, ReaderT, and so on take a computation that already has all the side-effects you're interested in and "spell it differently" -- for example, where before your state was spelled as a function that returns an updated value, it is now spelled as a State(T)-ful computation.
So let's say you have a computation. In a lazy language like Haskell you'd write
comp1 :: a
and know that this computation will be performed upon request and result in a value of type a.
Let's say you have a similar computation, but in addition to computing a value of type a, it might "fail" for some reason or another. For example, a might be Integer and this computation will "fail" if it performs a division by zero. We write this now as
comp2 :: Maybe a
where the Maybe constructor "tags" the a to indicate failure.
Let's say we have a computation similar to the one before: we are still allowed to fail, but we also collect a log during the computation. "Log collecting" is called Writer, so we'd like to tag our type with Writer as well as Maybe. Unfortunately
comp3_bad :: (Writer String) Maybe a
doesn't make any sense. Writer String can be applied to only one more type parameter, not two. We can consider a bit of what the underlying mechanics of this combined effect would be, though—it needs to return a Maybe paired with the log... or perhaps, if the computation fails, the log is discarded. There are two options
comp3_1 :: (String, Maybe a)
comp3_2 :: Maybe (String, a)
If we wrap these in Writer, we can see that they are equivalent to
comp3_1' :: Writer String (Maybe a)
comp3_2' :: Maybe (Writer String a)
This pattern of nesting is called composition. If you want to combine the effects of two monads then you'd like to compose them. For some monads this works directly, though it's a little cumbersome.
Unfortunately, some monads start to break the monad laws once they are composed. They can still be "stacked" but not in the normal way. So, we allow each type to determine its stacking method by creating the transformer version <monad>T.
newtype WriterT w m a = WriterT { runWriterT :: m (w, a) }
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
-- note that
WriterT String Maybe a == Maybe (String, a)
MaybeT (Writer String) a == (String, Maybe a)
These composed stacks of monads are called monad transformer stacks and they allow you to assemble side effects in layers.
So what happens if we have two different, but similar, stacks that we'd like to use together? For instance, we can consider Maybe to be a monad... or a monad transformer stack of a single layer. Compare that to WriterT String Maybe, which is a monad transformer stack of two layers, the bottom of which is Maybe.
These two stacks are very similar, but we cannot transport computations from one to the other. Or rather, we can, but it's fairly annoying
transport :: Maybe a -> WriterT String Maybe a
transport Nothing = WriterT Nothing
transport (Just a) = WriterT (Just ("", a))
This transport follows a general pattern where we "add another layer" onto a stack. This general pattern is called lift
lift :: Maybe a -> WriterT String Maybe a
Or, written polymorphically we see the extra layer t being prepended.
lift :: (MonadTrans t, Monad m) => m a -> t m a
Finally, we've come a long way from our pure computation at the beginning
comp1 :: a
and demonstrated that we can lift simple transformer stacks into more complex ones. Can we consider comp1 to be living in the very simplest of transformer stacks—the empty stack?
It turns out that this is actually a really valid point of view. We can even "lift" comp1 into a more sophisticated transformer stack... but the terminology changes slightly.
return :: Monad m => a -> m a
So, it's valid to think of return as lifting a pure computation into a basic monad. This is a foundational principle of monads even—that they can embed pure computations within them.
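A minimal sketch putting return and lift side by side in one concrete stack (names are illustrative; tell and lift are the standard transformers functions):
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Writer (WriterT, tell)

step :: WriterT String Maybe Int
step = do
    tell "starting; "          -- a native WriterT effect
    x <- lift (Just 41)        -- lift: embed a Maybe computation into the stack
    return (x + 1)             -- return: embed a pure value into the stack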

How do you identify monadic design patterns?

On my way to learning Haskell I'm starting to grasp the monad concept and starting to use the well-known monads in my code, but I'm still having difficulties approaching monads from a designer's point of view. In OO there are several rules, like "identify nouns" for objects, watch for some kind of state and interface... but I'm not able to find equivalent resources for monads.
So how do you identify a problem as monadic in nature? What are good design patterns for monadic design? What's your approach when you realize that some code would be better refactored into a monad?
A helpful rule of thumb is to look for values in a context; monads can be seen as layering "effects" on:
Maybe: partiality (uses: computations that can fail)
Either: short-circuiting errors (uses: error/exception handling)
[] (the list monad): nondeterminism (uses: list generation, filtering, ...)
State: a single mutable reference (uses: state)
Reader: a shared environment (uses: variable bindings, common information, ...)
Writer: a "side-channel" output or accumulation (uses: logging, maintaining a write-only counter, ...)
Cont: non-local control-flow (uses: too numerous to list)
Usually, you should design your monad by layering on the monad transformers from the standard Monad Transformer Library, which let you combine the above effects into a single monad. Together, these handle the majority of monads you might want to use. There are some additional monads not included in the MTL, such as the probability and supply monads.
As far as developing an intuition for whether a newly-defined type is a monad, and how it behaves as one, you can think of it by going up from Functor to Monad:
Functor lets you transform values with pure functions.
Applicative lets you embed pure values and express application — (<*>) lets you go from an embedded function and its embedded argument to an embedded result.
Monad lets the structure of embedded computations depend on the values of previous computations.
The easiest way to understand this is to look at the type of join:
join :: (Monad m) => m (m a) -> m a
This means that if you have an embedded computation whose result is a new embedded computation, you can create a computation that executes the result of that computation. So you can use monadic effects to create a new computation based on values of previous computations, and transfer control flow to that computation.
Interestingly, this can be a weakness of structuring things monadically: with Applicative, the structure of the computation is static (i.e. a given Applicative computation has a certain structure of effects that cannot change based on intermediate values), whereas with Monad it is dynamic. This can restrict the optimisation you can do; for instance, applicative parsers are less powerful than monadic ones (well, this isn't strictly true, but it effectively is), but they can be optimised better.
Note that (>>=) can be defined as
m >>= f = join (fmap f m)
and so a monad can be defined simply with return and join (assuming it's a Functor; all monads are applicative functors, but Haskell's typeclass hierarchy unfortunately doesn't require this for historical reasons).
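Concretely (standard base instances assumed), here is join at work on Maybe, together with a check that the two spellings of bind agree:
import Control.Monad (join)

flattened :: Maybe Int
flattened = join (Just (Just 3))                      -- Just 3

viaBind, viaJoin :: Maybe Int
viaBind = Just 3 >>= \x -> Just (x + 1)               -- Just 4
viaJoin = join (fmap (\x -> Just (x + 1)) (Just 3))   -- Just 4, the same thing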
As an additional note, you probably shouldn't focus too heavily on monads, no matter what kind of buzz they get from misguided non-Haskellers. There are many typeclasses that represent meaningful and powerful patterns, and not everything is best expressed as a monad. Applicative, Monoid, Foldable... which abstraction to use depends entirely on your situation. And, of course, just because something is a monad doesn't mean it can't be other things too; being a monad is just another property of a type.
So, you shouldn't think too much about "identifying monads"; the questions are more like:
Can this code be expressed in a simpler monadic form? With which monad?
Is this type I've just defined a monad? What generic patterns encoded by the standard functions on monads can I take advantage of?
Follow the types.
If you find you have written functions with all of these types
(a -> b) -> YourType a -> YourType b
a -> YourType a
YourType (YourType a) -> YourType a
or all of these types
a -> YourType a
YourType a -> (a -> YourType b) -> YourType b
then YourType may be a monad. (I say “may” because the functions must obey the monad laws as well.)
(Remember you can reorder arguments, so e.g. YourType a -> (a -> b) -> YourType b is just (a -> b) -> YourType a -> YourType b in disguise.)
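For example, for lists all of these shapes are familiar Prelude functions, which is one quick way to convince yourself that [] is a monad:
mapList :: (a -> b) -> [a] -> [b]
mapList = map          -- the fmap shape

singleton :: a -> [a]
singleton x = [x]      -- the return/pure shape

flatten :: [[a]] -> [a]
flatten = concat       -- the join shape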
Don't look out only for monads! If you have functions of all of these types
YourType
YourType -> YourType -> YourType
and they obey the monoid laws, you have a monoid! That can be valuable too. Similarly for other typeclasses, most importantly Functor.
There's the effect view of monads:
Maybe - partiality / failure short-circuiting
Either - error reporting / short-circuiting (like Maybe with more information)
Writer - write only "state", commonly logging
Reader - read-only state, commonly environment passing
State - read / write state
Resumption - pausable computation
List - multiple successes
Once you are familiar with these effects, it's easy to build monads combining them with monad transformers. Note that combining some monads needs special care (particularly Cont and any monads with backtracking).
One important thing to note is that there aren't many monads. There are some exotic ones that aren't in the standard libraries, e.g. the probability monad and variations of the Cont monad like Codensity. But unless you are doing something mathematical, it's unlikely you will invent (or discover) a new monad; however, if you use Haskell long enough you'll build many monads that are different combinations of the standard ones.
Edit - Also note that the order you stack monad transformers results in different monads:
If you add ErrorT (transformer) on top of a Writer monad, you get the monad (log, Either err a), which always gives access to the log.
If you add WriterT (transformer) on top of an Error monad, you get the monad Either err (log, a) - you can only access the log if you have no error.
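A minimal sketch of the two orderings, using ExceptT from the transformers package in place of the older ErrorT (the library happens to put the log second in the Writer pair, but the point about ordering is the same):
import Control.Monad.Trans.Except (ExceptT, Except)
import Control.Monad.Trans.Writer (WriterT, Writer)

type Err = String     -- illustrative error and log types
type Log = [String]

-- ExceptT outermost over Writer: runs to (Either Err a, Log); the log always survives.
type LogSurvives a = ExceptT Err (Writer Log) a

-- WriterT outermost over an error monad: runs to Either Err (a, Log);
-- the log is only available when nothing failed.
type LogLostOnError a = WriterT Log (Except Err) a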
This is sort of a non-answer, but I feel it is important to say anyways. Just ask! StackOverflow, /r/haskell, and the #haskell irc channel are all great places to get quick feedback from smart people. If you are working on a problem, and you suspect that there's some monadic magic that could make it easier, just ask! The Haskell community loves to solve problems, and is ridiculously friendly.
Don't misunderstand, I'm not encouraging you to never learn for yourself. Quite the contrary, interacting with the Haskell community is one of the best ways to learn. LYAH and RWH, 2 Haskell books that are freely available online, come highly recommended as well.
Oh, and don't forget to play, play, play! As you play around with monadic code, you'll start to get the feel of what "shape" monads have, and when monadic combinators can be useful. If you're rolling your own monad, then usually the type system will guide you to an obvious, simple solution. But to be honest, you should rarely need to roll your own instance of Monad, since Haskell libraries provide tons of useful things as mentioned by other answerers.
There's a common notion that one sees in many programming languages of an "infectious function tag" -- some special behavior for a function that must extend to its callers as well.
Rust functions can be unsafe, meaning they perform operations that can potentially violate memory safety. unsafe functions can call normal functions, but any function that calls an unsafe function must be unsafe as well.
Python functions can be async, meaning they return a promise rather than an actual value. async functions can call normal functions, but invocation of an async function (via await) can only be done by another async function.
Haskell functions can be impure, meaning they return an IO a rather than an a. Impure functions can call pure functions, but impure functions can only be called by other impure functions.
Mathematical functions can be partial, meaning they don't map every value in their domain to an output. The definitions of partial functions can reference total functions, but if a total function maps some of its domain to a partial function, it becomes partial as well.
While there may be ways to invoke a tagged function from an untagged function, there is no general way, and doing so can often be dangerous and threatens to break the abstraction the language tries to provide.
The benefit, then, of having tags is that you can expose a set of special primitives that are given this tag and have any function that uses these primitives make that clear in its signature.
Say you're a language designer and you recognize this pattern, and you decide that you want to allow user-defined tags. Let's say the user defined a tag Err, representing computations that may throw an error. A function using Err might look like this:
function div <Err> (n: Int, d: Int): Int
    if d == 0
        throwError("division by 0")
    else
        return (n / d)
If we wanted to simplify things, we might observe that there's nothing erroneous about taking arguments - it's computing the return value where problems might arise. So we can restrict tags to functions that take no arguments, and have div return a closure rather than the actual value:
function div(n: Int, d: Int): <Err> () -> Int
    () =>
        if d == 0
            throwError("division by 0")
        else
            return (n / d)
In a lazy language such as Haskell, we don't need the closure, and can just return a lazy value directly:
div :: Int -> Int -> Err Int
div _ 0 = throwError "division by 0"
div n d = return (n `quot` d)
It is now apparent that, in Haskell, tags need no special language support - they are ordinary type constructors. Let's make a typeclass for them!
class Tag m where
We want to be able to call an untagged function from a tagged function, which is equivalent to turning an untagged value (a) into a tagged value (m a).
    addTag :: a -> m a
We also want to be able to take a tagged value (m a) and apply a tagged function (a -> m b) to get a tagged result (m b):
    embed :: m a -> (a -> m b) -> m b
This, of course, is precisely the definition of a monad! addTag corresponds to return, and embed corresponds to (>>=).
It is now clear that "tagged functions" are merely a type of monad. As such, whenever you spot a place where a "function tag" could apply, chances are you've got a place suitable for a monad.
P.S. Regarding the tags I've mentioned in this answer: Haskell models impurity with the IO monad and partiality with the Maybe monad. Most languages implement async/promises fairly transparently, and there seems to be a Haskell package called promise that mimics this functionality. The Err monad is equivalent to the Either String monad. I'm not aware of any language that models memory unsafety monadically, but it could be done.
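To make that last correspondence concrete, a minimal sketch (assuming Err is just a synonym for Either String and throwError is Left, rather than the mtl class method of the same name):
type Err = Either String

throwError :: String -> Err a
throwError = Left

divE :: Int -> Int -> Err Int
divE _ 0 = throwError "division by 0"
divE n d = return (n `quot` d)
-- divE 10 2 == Right 5;  divE 1 0 == Left "division by 0"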

Is Applicative IO implemented based on functions from Monad IO?

In "Learn You a Haskell for Great Good!" author claims that Applicative IO instance is implemented like this:
instance Applicative IO where
    pure = return
    a <*> b = do
        f <- a
        x <- b
        return (f x)
I might be wrong, but it seems that both return and the do-specific constructs (some sugared binds, (>>=)) come from Monad IO. Assuming that's correct, my actual question is:
Why does the Applicative IO implementation depend on Monad IO functions/combinators?
Isn't Applicative a less powerful concept than Monad?
Edit (some clarifications):
This implementation is against my intuition, because according to Typeclassopedia article it's required for a given type to be Applicative before it can be made Monad (or it should be in theory).
(...) according to Typeclassopedia article it's required for a given type to be Applicative before it can be made Monad (or it should be in theory).
Yes, your parenthetical aside is exactly the issue here. In theory, any Monad should also be an Applicative, but this is not actually required, for historical reasons (i.e., because Monad has been around longer). This is not the only peculiarity of Monad, either.
Consider the actual definitions of the relevant type classes, taken from the base package's source on Hackage.
Here's Applicative:
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b
    (*>)  :: f a -> f b -> f b
    (<*)  :: f a -> f b -> f a
...about which we can observe the following:
The context is correct given currently existing type classes, i.e., it requires Functor.
It's defined in terms of function application, rather than in (possibly more natural from a mathematical standpoint) terms of lifting tuples.
It includes technically superfluous operators equivalent to lifting constant functions.
Meanwhile, here's Monad:
class Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b
    (>>)   :: m a -> m b -> m b
    return :: a -> m a
    fail   :: String -> m a
...about which we can observe the following:
The context not only ignores Applicative, but also Functor, both of which are logically implied by Monad but not explicitly required.
It's also defined in terms of function application, rather than the more mathematically natural definition using return and join.
It includes a technically superfluous operator equivalent to lifting a constant function.
It also includes fail which doesn't really fit in at all.
In general, the ways that the Monad type class differs from the mathematical concept it's based on can be traced back through its history as an abstraction for programming. Some, like the function application bias it shares with Applicative, are a reflection of existing in a functional language; others, like fail or the lack of an appropriate class context, are historical accidents more than anything else.
What it all comes down to is that having an instance of Monad implies an instance for Applicative, which in turn implies an instance for Functor. A class context merely formalizes this explicitly; it remains true regardless. As it stands, given a Monad instance, both Functor and Applicative can be defined in a completely generic way. Applicative is "less powerful" than Monad in exactly the same sense that it is more general: Any Monad is automatically Applicative if you copy+paste the generalized instance, but there exist Applicative instances which cannot be defined as a Monad.
A class context, like Functor f => Applicative f says two things: That the latter implies the former, and that a definition must exist to fulfill that implication. In many cases, defining the latter implicitly defines the former anyway, but the compiler cannot deduce that in general, and thus requires both instances to be written out explicitly. The same thing can be observed with Eq and Ord--the latter obviously implies the former, but you still need to define an Eq instance in order to define one for Ord.
The IO type is abstract in Haskell, so if you want to implement a general Applicative for IO you have to do it with the operations that are supported by IO. Since you can implement Applicative in terms of the Monad operations that seems like a good choice. Can you think of another way to implement it?
And yes, Applicative is in some sense less powerful than Monad.
Isn't Applicative a less powerful concept than Monad?
Yes, and therefore whenever you have a Monad you can always make it an Applicative. You could replace IO with any other monad in your example and it would be a valid Applicative instance.
As an analogy, while a color printer may be considered more powerful than a grayscale printer, you can still use one to print a grayscale image.
Of course, one could also base a Monad instance on an Applicative and set return = pure, but you won't be able to define >>= generally. This is what Monad being more powerful means.
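A sketch of why that is: with only fmap and (<*>), the best you can do is produce a nested value, and there is no Applicative-level way to flatten it.
almostBind :: Applicative f => f a -> (a -> f b) -> f (f b)
almostBind m k = fmap k m
-- Getting from f (f b) back to f b is exactly join, which needs Monad.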
In a perfect world every Monad would be an Applicative (so we would have class Applicative a => Monad a where ...), but for historical reasons both type classes are independent. So your observation that this definition is kind of "backwards" (using the more powerful abstraction to implement the less powerful one) is correct.
You already have perfectly good answers for older versions of GHC, but in the latest version you actually do have class Applicative m => Monad m so your question needs another answer.
In terms of GHC implementation: GHC just checks what instances are defined for a given type before it tries to compile any of them.
In terms of code semantics: class Applicative m => Monad m doesn't mean the Applicative instance has to be defined "first", just that if it hasn't been defined by the end of your program then the compiler will abort.
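For completeness, a minimal sketch of the modern (post-AMP) layout that the class context asks for, using a hypothetical newtype Foo: the Functor and Applicative instances are written alongside the Monad one.
newtype Foo a = Foo a

instance Functor Foo where
    fmap f (Foo a) = Foo (f a)

instance Applicative Foo where
    pure = Foo
    Foo f <*> Foo a = Foo (f a)

instance Monad Foo where
    Foo a >>= k = k a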
