What does Haskell's <|> operator do?

Going through Haskell's documentation is always a bit of a pain for me, because all the information you get about a function is often nothing more than just: f a -> f [a] which could mean any number of things.
As is the case of the <|> function.
All I'm given is this: (<|>) :: f a -> f a -> f a and that it's an "associative binary operation"...
Upon inspection of Control.Applicative I learn that it does seemingly unrelated things depending on implementation.
instance Alternative Maybe where
    empty = Nothing
    Nothing <|> r = r
    l <|> _ = l
Ok, so it returns the right argument if the left is Nothing, otherwise it returns the left, gotcha... This leads me to believe it's a "left or right" operator, which kinda makes sense given its use of |, and |'s historical use as "OR"
instance Alternative [] where
    empty = []
    (<|>) = (++)
Except here it just calls list's concatenation operator... Breaking my idea down...
So what exactly is that function? What's its use? Where does it fit in in the grand scheme of things?

Typically it means "choice" or "parallel" in that a <|> b is either a "choice" of a or b or a and b done in parallel. But let's back up.
Really, there is no practical meaning to operations in typeclasses like (<*>) or (<|>). These operations are given meaning in two ways: (1) via laws and (2) via instantiations. If we are not talking about a particular instance of Alternative then only (1) is available for intuiting meaning.
So "associative" means that a <|> (b <|> c) is the same as (a <|> b) <|> c. This is useful as it means that we only care about the sequence of things chained together with (<|>), not their "tree structure".
Other laws include identity with empty. In particular, a <|> empty = empty <|> a = a. In our intuition with "choice" or "parallel" these laws read as "a or (something impossible) must be a" or "a alongside (empty process) is just a". It indicates that empty is some kind of "failure mode" for an Alternative.
There are other laws with how (<|>)/empty interact with fmap (from Functor) or pure/(<*>) (from Applicative), but perhaps the best way to move forward in understanding the meaning of (<|>) is to examine a very common example of a type which instantiates Alternative: a Parser.
If x :: Parser A and y :: Parser B then (,) <$> x <*> y :: Parser (A, B) parses x and then y in sequence. In contrast, (fmap Left x) <|> (fmap Right y) parses either x or y, beginning with x, to try out both possible parses. In other words, it indicates a branch in your parse tree, a choice, or a parallel parsing universe.
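To make that concrete, here is a minimal sketch of such a parser type (a toy definition for illustration; real libraries like parsec differ in detail). Note how the Alternative instance for Parser just defers to the Alternative instance for Maybe:

import Control.Applicative

-- A parser consumes a prefix of the input and either fails (Nothing)
-- or returns a result together with the remaining input.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
    fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
    pure a = Parser $ \s -> Just (a, s)
    Parser pf <*> Parser pa = Parser $ \s -> do
        (f, s')  <- pf s
        (a, s'') <- pa s'
        Just (f a, s'')

instance Alternative Parser where
    empty = Parser $ const Nothing          -- the parser that always fails
    Parser p <|> Parser q =
        Parser $ \s -> p s <|> q s          -- try p; if it fails, try q

char :: Char -> Parser Char
char c = Parser $ \s -> case s of
    (x:xs) | x == c -> Just (c, xs)
    _               -> Nothing

With this, runParser (char 'a' <|> char 'b') "bc" yields Just ('b',"c"): the first branch fails, so the second is tried.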

(<|>) :: f a -> f a -> f a actually tells you quite a lot, even without considering the laws for Alternative.
It takes two f a values, and has to give one back. So it will have to combine or select from its inputs somehow. It's polymorphic in the type a, so it will be completely unable to inspect whatever values of type a might be inside an f a; this means it can't do the "combining" by combining a values, so it must do it purely in terms of whatever structure the type constructor f adds.
The name helps a bit too. Some sort of "OR" is indeed the vague concept the authors were trying to indicate with the name "Alternative" and the symbol "<|>".
Now if I've got two Maybe a values and I have to combine them, what can I do? If they're both Nothing I'll have to return Nothing, with no way to create an a. If at least one of them is a Just ... I can return one of my inputs as-is, or I can return Nothing. There are very few functions that are even possible with the type Maybe a -> Maybe a -> Maybe a, and for a class whose name is "Alternative" the one given is pretty reasonable and obvious.
How about combining two [a] values? There are more possible functions here, but really it's pretty obvious what this is likely to do. And the name "Alternative" does give you a good hint at what this is likely to be about, provided you're familiar with the standard "nondeterminism" interpretation of the list monad/applicative; if you see a [a] as a "nondeterministic a" with a collection of possible values, then the obvious way for "combining two nondeterministic a values" in a way that might deserve the name "Alternative" is to produce a nondeterministic a which could be any of the values from either of the inputs.
And for parsers: combining two parsers has two obvious broad interpretations that spring to mind; either you produce a parser that would match what the first does and then what the second does, or you produce a parser that matches either what the first does or what the second does (there are of course subtle details of each of these options that leave room for variation). Given the name "Alternative", the "or" interpretation seems very natural for <|>.
So, seen from a sufficiently high level of abstraction, these operations do all "do the same thing". The type class is really for operating at that high level of abstraction where these things all "look the same". When I'm operating on a single known instance I just think of the <|> operation as exactly what it does for that specific type.

An interesting example of an Alternative that isn't a parser or a MonadPlus-like thing is Concurrently, a very useful type from the async package.
For Concurrently, empty is a computation that goes on forever. And (<|>) executes its arguments concurrently, returns the result of the first one that completes, and cancels the other one.
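A sketch of what that looks like in use (assuming the async package; the delays and strings are made up for illustration):

import Control.Applicative ((<|>))
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (Concurrently(..))

-- Run both actions concurrently; the faster one wins, the other is cancelled.
fastest :: IO String
fastest = runConcurrently $
        Concurrently (threadDelay 500000 >> pure "slow")
    <|> Concurrently (threadDelay 100000 >> pure "fast")

Here fastest returns "fast" after about a tenth of a second, having cancelled the slower action.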

These seem very different, but consider:
Nothing <|> Nothing == Nothing
[] <|> [] == []
Just a <|> Nothing == Just a
[a] <|> [] == [a]
Nothing <|> Just b == Just b
[] <|> [b] == [b]
So... these are actually very, very similar, even if the implementation looks different. The only real difference is here:
Just a <|> Just b == Just a
[a] <|> [b] == [a, b]
A Maybe can only hold one value (or zero, but not any other amount). But hey, if they were both identical, why would you need two different types? The whole point of them being different is, you know, to be different.
In summary, the implementation may look totally different, but these are actually quite similar.

Related

Pattern match over non-constructor functions

One of the most powerful ways pattern matching and lazy evaluation can come together is to bypass expensive computation. However I am still shocked that Haskell only permits the pattern matching of constructors, which is barely pattern matching at all!
Is there some way to implement the following functionality in Haskell:
exp :: Double -> Double
exp 0 = 1
exp (log a) = a
--...
log :: Double -> Double
log 1 = 0
log (exp a) = a
--...
The original problem I found this useful in was writing an associativity preference / rule in a Monoid class:
class Monoid m where
    iden :: m
    (+) :: m -> m -> m
    (+) iden a = a
    (+) a iden = a
    --Line with issue
    (+) ((+) a b) c = (+) a ((+) b c)
There's no reason to be shocked about this. How would it be even remotely feasible to pattern match on arbitrary functions? Most functions aren't invertible, and even for those that are it is typically nontrivial to actually compute the inverses.
Of course the compiler could in principle handle trivial examples like replacing a literal exp (log x) with x, but that would be almost completely useless in practice (in the unlikely event somebody were to literally write that, they might as well reduce it right there in the source), and it would generally lead to very weird, unpredictable behaviour if inlining order changed whether or not the compiler could see that a match applies.
(There is however a thing called rewrite rules, which is similar to what you proposed but is seen as only an optimisation tool.)
Even the two lines from the Monoid class that don't error don't make sense, but for different reasons. First, when you write
(+) iden a = a
(+) a iden = a
this doesn't do what you seem to think. These are actually two redundant catch-all clauses, equivalent to
(+) x y = y
(+) x y = x
...which is an utterly nonsensical thing to write. What you want to state could in fact be written as
default (+) :: Eq m => m -> m -> m
x + y
  | x == iden = y
  | y == iden = x
  | otherwise = ...
...but this still doesn't accomplish anything useful, because this is never going to be a full definition of +. And as soon as a concrete instance even begins to define its own + operator, the complete default one is going to be ignored.
Moreover, if you were to have these kinds of clauses all over your Haskell project, it would in practice just mean you're performing a lot of unnecessary, redundant extra checks. A law-abiding Monoid instance needs to fulfill mempty <> a ≡ a anyway, so there's no point explicitly special-casing it.
I think what you really want is tests. It would make sense to specify laws right in a class declaration in a way that they could automatically be checked, but standard Haskell has no syntax for this. Most projects just do it in a separate test suite, using QuickCheck to generate example inputs. I think there's also a tool that allows you to put the test cases right in your source file, but I forget what it's called.
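For instance, a minimal sketch of such a test suite, checking the real Monoid laws at a concrete type ([Int]) with QuickCheck:

import Test.QuickCheck

prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity xs = mempty <> xs == xs

prop_rightIdentity :: [Int] -> Bool
prop_rightIdentity xs = xs <> mempty == xs

prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc xs ys zs = (xs <> ys) <> zs == xs <> (ys <> zs)

main :: IO ()
main = do
    quickCheck prop_leftIdentity
    quickCheck prop_rightIdentity
    quickCheck prop_assoc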

Why Do We Need Maybe Monad Over Either Monad

I was playing around with the Maybe and Either monad types (chaining, applying conditional functions according to the returned value, returning an error message saying which chained function failed, etc.). It seems to me that we can achieve the same and more of what Maybe does by using the Either monad. So my question is: what is the practical or conceptual difference between the two?
You are of course right that Maybe a is isomorphic to Either () a. The thing is that they are often semantically used to denote different things, a bit like the difference between returning null and throwing a NoSuchElementException:
Nothing/None denotes the "expected" missing of something, while
Left e denotes an error in getting it, for whatever reason.
That said, we might even combine the two to something like:
query :: Either DBError (Maybe String)
where we express both the possibility of a missing value (a DB NULL) and an error in the connection, the DBMS, or whatever (not saying that there aren't better designs, but you get the point).
Sometimes, the border is fluid; for safeHead :: [a] -> Maybe a, we could say that the expected possibility of the error is encoded in the intent of the function, while something like safeDivide might be encoded as Float -> Float -> Either FPError Float or Float -> Float -> Maybe Float, depending on the use case (again, just some stupid examples...).
If in doubt, the best option is probably to use a custom result ADT with semantic encoding (like data QueryResult = Success String | Null | Failure DBError), and to prefer Maybe to simple cases where it is "traditionally expected" (a subjective point, which however will be mostly OK if you gain experience).
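A sketch of what that can look like in practice (DBError and the example values are made up for illustration):

data DBError = ConnectionLost | Timeout deriving Show

-- Left: the query itself failed.
-- Right Nothing: the query succeeded, but the column was NULL.
lookupNickname :: Int -> Either DBError (Maybe String)
lookupNickname 1 = Right (Just "alice")
lookupNickname 2 = Right Nothing
lookupNickname _ = Left ConnectionLost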
phg's answer is great. I will chime in with something that helped clear it up for me when I was learning them:
Maybe is one (value) or none – ie, you have a value or you have nothing
Either is a logical disjunction, but you always have exactly one (value) - ie, you have one or the other, but not both.
Maybe is great for cases where you may or may not have a value - for example, looking for an item in a list. If the list contains it, we get (Just x), otherwise we get Nothing.
Either is the perfect representation of a branch in your code - it's going to go one way or the other; Left or Right. We use a mnemonic to remember it: Right is the right (correct) way; Left is the wrong way (Error). This is not its only use of course, but definitely the most common.
I know the differences might seem subtle at first, but really they're suitable for very different things.
Well, you see, we can put this to the extreme by saying that all product types can be represented by just 2-tuples and all non-recursive sum types by Either. To additionally represent recursive types, we need a fixpoint type.
For example, why have 4-tuples (a,b,c,d) when we could as well write (a, (b, (c,d))) or (((a,b), c), d) ?
Or why have lists, when the following works as well?
data Y f = Y (f (Y f))
type List a = Y ((,) (Either () a))
nil = Y (Left (), undefined)
cons a as = Y (Right a, as)
infixr 4 cons
numbers = 1 `cons` 2 `cons` 3 `cons` nil
-- this is like foldl
reduce f z (Y (Left (), _)) = z
reduce f z (Y (Right x, xs)) = reduce f (f z x) xs
total = reduce (+) 0 numbers

Why do we need monads?

In my humble opinion the answers to the famous question "What is a monad?", especially the most voted ones, try to explain what a monad is without clearly explaining why monads are really necessary. Can they be explained as the solution to a problem?
Why do we need monads?
We want to program only using functions. ("functional programming (FP)" after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). This way, "the program" is a function as well: main = f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0), division by 0). We have no "exceptions" in FP (an exception is not a function). How do we solve this?
Solution: Let's allow functions to return two kinds of things: instead of having g : Real,Real -> Real (a function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (a function from two reals into (real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses either a real or simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type). Also note that >>= must be written only once per "boxing type" (different box, different adapting algorithm).
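Spelled out as a sketch (bindMaybe is a stand-in for the real >>= of Maybe, and f and g mirror the functions above):

bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- failure upstream: don't even call f
bindMaybe (Just x) f = f x       -- unbox the value and feed it to f

g :: Double -> Double -> Maybe Double
g _ 0 = Nothing                  -- division by zero: no result
g x y = Just (x / y)

f :: Double -> Double
f x = 2 * x

-- g 6 3 `bindMaybe` (Just . f)  ==  Just 4.0
-- g 6 0 `bindMaybe` (Just . f)  ==  Nothing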
Many other problems arise which can be solved using this same pattern: 1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values". 2. Have a composer/linker g >>= f to help connecting g's output to f's input, so we don't have to change any f at all.
Remarkable problems that can be solved using this technique are:
having a global state that every function in the sequence of functions ("the program") can share: solution: the State monad.
We don't like "impure functions": functions that yield different output for the same input. Therefore, let's mark those functions, making them return a tagged/boxed value: the IO monad.
Total happiness!
The answer is, of course, "We don't". As with all abstractions, it isn't necessary.
Haskell does not need a monad abstraction. It isn't necessary for performing IO in a pure language. The IO type takes care of that just fine by itself. The existing monadic desugaring of do blocks could be replaced with desugaring to bindIO, returnIO, and failIO as defined in the GHC.Base module. (It's not a documented module on hackage, so I'll have to point at its source for documentation.) So no, there's no need for the monad abstraction.
So if it's not needed, why does it exist? Because it was found that many patterns of computation form monadic structures. Abstraction of a structure allows for writing code that works across all instances of that structure. To put it more concisely - code reuse.
In functional languages, the most powerful tool found for code reuse has been composition of functions. The good old (.) :: (b -> c) -> (a -> b) -> (a -> c) operator is exceedingly powerful. It makes it easy to write tiny functions and glue them together with minimal syntactic or semantic overhead.
But there are cases when the types don't work out quite right. What do you do when you have foo :: (b -> Maybe c) and bar :: (a -> Maybe b)? foo . bar doesn't typecheck, because b and Maybe b aren't the same type.
But... it's almost right. You just want a bit of leeway. You want to be able to treat Maybe b as if it were basically b. It's a poor idea to just flat-out treat them as the same type, though. That's more or less the same thing as null pointers, which Tony Hoare famously called the billion-dollar mistake. So if you can't treat them as the same type, maybe you can find a way to extend the composition mechanism (.) provides.
In that case, it's important to really examine the theory underlying (.). Fortunately, someone has already done this for us. It turns out that the combination of (.) and id forms a mathematical construct known as a category. But there are other ways to form categories. A Kleisli category, for instance, allows the morphisms being composed to be augmented a bit. A Kleisli category for Maybe would consist of (.) :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c) and id :: a -> Maybe a. That is, the morphisms in the category augment the (->) with a Maybe, so (a -> b) becomes (a -> Maybe b).
And suddenly, we've extended the power of composition to things that the traditional (.) operation doesn't work on. This is a source of new abstraction power. Kleisli categories work with more types than just Maybe. They work with every type that can assemble a proper category, obeying the category laws.
Left identity: id . f = f
Right identity: f . id = f
Associativity: f . (g . h) = (f . g) . h
As long as you can prove that your type obeys those three laws, you can turn it into a Kleisli category. And what's the big deal about that? Well, it turns out that monads are exactly the same thing as Kleisli categories. Monad's return is the same as Kleisli id. Monad's (>>=) isn't identical to Kleisli (.), but it turns out to be very easy to write each in terms of the other. And the category laws are the same as the monad laws, when you translate them across the difference between (>>=) and (.).
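As a sketch, here is the Kleisli category for Maybe written out by hand, with (>>=) recovered from it (the names idK, compK and bind are illustrative):

idK :: a -> Maybe a
idK = Just                          -- Kleisli id, i.e. Monad's return

compK :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c)
compK f g a = case g a of           -- Kleisli (.): short-circuits on Nothing
    Nothing -> Nothing
    Just b  -> f b

bind :: Maybe a -> (a -> Maybe b) -> Maybe b
bind m f = compK f (const m) ()     -- (>>=) written using Kleisli (.)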
So why go through all this bother? Why have a Monad abstraction in the language? As I alluded to above, it enables code reuse. It even enables code reuse along two different dimensions.
The first dimension of code reuse comes directly from the presence of the abstraction. You can write code that works across all instances of the abstraction. There's the entire monad-loops package consisting of loops that work with any instance of Monad.
The second dimension is indirect, but it follows from the existence of composition. When composition is easy, it's natural to write code in small, reusable chunks. This is the same way having the (.) operator for functions encourages writing small, reusable functions.
So why does the abstraction exist? Because it's proven to be a tool that enables more composition in code, resulting in creating reusable code and encouraging the creation of more reusable code. Code reuse is one of the holy grails of programming. The monad abstraction exists because it moves us a little bit towards that holy grail.
Benjamin Pierce said in TAPL
A type system can be regarded as calculating a kind of static
approximation to the run-time behaviours of the terms in a program.
That's why a language equipped with a powerful type system is strictly more expressive than a poorly typed language. You can think about monads in the same way.
As Carl and sigfpe point out, you can equip a datatype with all the operations you want without resorting to monads, typeclasses or whatever other abstract stuff. However, monads allow you not only to write reusable code, but also to abstract away all redundant details.
As an example, let's say we want to filter a list. The simplest way is to use the filter function: filter (> 3) [1..10], which equals [4,5,6,7,8,9,10].
A slightly more complicated version of filter, that also passes an accumulator from left to right, is
import Data.List (mapAccumL)

swap (x, y) = (y, x)
(.*) = (.) . (.)

filterAccum :: (a -> b -> (Bool, a)) -> a -> [b] -> [b]
filterAccum f a xs = [x | (x, True) <- zip xs $ snd $ mapAccumL (swap .* f) a xs]
To get all i, such that i <= 10, sum [1..i] > 4, sum [1..i] < 25, we can write
filterAccum (\a x -> let a' = a + x in (a' > 4 && a' < 25, a')) 0 [1..10]
which equals [3,4,5,6].
Or we can redefine the nub function, that removes duplicate elements from a list, in terms of filterAccum:
nub' = filterAccum (\a x -> (x `notElem` a, x:a)) []
nub' [1,2,4,5,4,3,1,8,9,4] equals [1,2,4,5,3,8,9]. A list is passed as an accumulator here. The code works because it's possible to leave the list monad, so the whole computation stays pure (notElem doesn't use >>= actually, but it could). However, it's not possible to safely leave the IO monad (i.e. you cannot execute an IO action and return a pure value — the value will always be wrapped in the IO monad). Another example is mutable arrays: after you have left the ST monad, where a mutable array lives, you cannot update the array in constant time anymore. So we need the monadic filtering from the Control.Monad module:
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
filterM _ [] = return []
filterM p (x:xs) = do
    flg <- p x
    ys <- filterM p xs
    return (if flg then x:ys else ys)
filterM executes a monadic action for all elements of a list, yielding the elements for which the monadic action returns True.
A filtering example with an array:
import Control.Monad (filterM)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

nub' xs = runST $ do
    arr <- newArray (1, 9) True :: ST s (STUArray s Int Bool)
    let p i = readArray arr i <* writeArray arr i False
    filterM p xs

main = print $ nub' [1,2,4,5,4,3,1,8,9,4]
prints [1,2,4,5,3,8,9] as expected.
And a version with the IO monad, which asks what elements to return:
main = filterM p [1,2,4,5] >>= print
  where p i = putStrLn ("return " ++ show i ++ "?") *> readLn
E.g.
return 1? -- output
True -- input
return 2?
False
return 4?
False
return 5?
True
[1,5] -- output
And as a final illustration, filterAccum can be defined in terms of filterM:
import Control.Monad.State (evalState, state)

filterAccum f a xs = evalState (filterM (state . flip f) xs) a
with the StateT monad, which is used under the hood, being just an ordinary datatype.
This example illustrates that monads not only allow you to abstract away computational context and write clean reusable code (due to the composability of monads, as Carl explains), but also to treat user-defined datatypes and built-in primitives uniformly.
I don't think IO should be seen as a particularly outstanding monad, but it's certainly one of the more astounding ones for beginners, so I'll use it for my explanation.
Naïvely building an IO system for Haskell
The simplest conceivable IO system for a purely-functional language (and in fact the one Haskell started out with) is this:
main₀ :: String -> String
main₀ _ = "Hello World"
With laziness, that simple signature is enough to actually build interactive terminal programs – very limited ones, though. Most frustrating is that we can only output text. What if we added some more exciting output possibilities?
data Output = TxtOutput String
            | Beep Frequency

main₁ :: String -> [Output]
main₁ _ = [ TxtOutput "Hello World"
          -- , Beep 440  -- for debugging
          ]
Cute, but of course a much more realistic “alternative output” would be writing to a file. But then you'd also want some way to read from files. Any chance?
Well, when we take our main₁ program and simply pipe a file to the process (using operating system facilities), we have essentially implemented file-reading. If we could trigger that file-reading from within the Haskell language...
readFile :: FilePath -> (String -> [Output]) -> [Output]
This would use an “interactive program” String->[Output], feed it a string obtained from a file, and yield a non-interactive program that simply executes the given one.
There's one problem here: we don't really have a notion of when the file is read. The [Output] list sure gives a nice order to the outputs, but we don't get an order for when the inputs will be done.
Solution: make input-events also items in the list of things to do.
data IO₀ = TxtOut String
         | TxtIn (String -> [Output])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [Output])
         | Beep Double

main₂ :: String -> [IO₀]
main₂ _ = [ FileRead "/dev/null" $ \_ ->
              [TxtOutput "Hello World"]
          ]
Ok, now you may spot an imbalance: you can read a file and make output dependent on it, but you can't use the file contents to decide to e.g. also read another file. Obvious solution: make the result of the input-events also something of type IO, not just Output. That sure includes simple text output, but also allows reading additional files etc..
data IO₁ = TxtOut String
         | TxtIn (String -> [IO₁])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [IO₁])
         | Beep Double

main₃ :: String -> [IO₁]
main₃ _ = [ TxtIn $ \_ ->
              [TxtOut "Hello World"]
          ]
That would now actually allow you to express any file operation you might want in a program (though perhaps not with good performance), but it's somewhat overcomplicated:
main₃ yields a whole list of actions. Why don't we simply use the signature :: IO₁, which has this as a special case?
The lists don't really give a reliable overview of program flow anymore: most subsequent computations will only be “announced” as the result of some input operation. So we might as well ditch the list structure, and simply cons a “and then do” to each output operation.
data IO₂ = TxtOut String IO₂
         | TxtIn (String -> IO₂)
         | Terminate

main₄ :: IO₂
main₄ = TxtIn $ \_ ->
          TxtOut "Hello World"
            Terminate
Not too bad!
So what does all of this have to do with monads?
In practice, you wouldn't want to use plain constructors to define all your programs. There would need to be a good couple of such fundamental constructors, yet for most higher-level stuff we would like to write a function with some nice high-level signature. It turns out most of these would look quite similar: accept some kind of meaningfully-typed value, and yield an IO action as the result.
getTime :: (UTCTime -> IO₂) -> IO₂
randomRIO :: Random r => (r,r) -> (r -> IO₂) -> IO₂
findFile :: RegEx -> (Maybe FilePath -> IO₂) -> IO₂
There's evidently a pattern here, and we'd better write it as
type IO₃ a = (a -> IO₂) -> IO₂  -- If this reminds you of continuation-passing
                                -- style, you're right.
getTime :: IO₃ UTCTime
randomRIO :: Random r => (r,r) -> IO₃ r
findFile :: RegEx -> IO₃ (Maybe FilePath)
Now that starts to look familiar, but we're still only dealing with thinly-disguised plain functions under the hood, and that's risky: each “value-action” has the responsibility of actually passing on the resulting action of any contained function (else the control flow of the entire program is easily disrupted by one ill-behaved action in the middle). We'd better make that requirement explicit. Well, it turns out those are the monad laws, though I'm not sure we can really formulate them without the standard bind/join operators.
At any rate, we've now reached a formulation of IO that has a proper monad instance:
import Control.Monad (ap)

data IO₄ a = TxtOut String (IO₄ a)
           | TxtIn (String -> IO₄ a)
           | TerminateWith a

txtOut :: String -> IO₄ ()
txtOut s = TxtOut s $ TerminateWith ()

txtIn :: IO₄ String
txtIn = TxtIn TerminateWith

instance Functor IO₄ where
    fmap f (TerminateWith a) = TerminateWith $ f a
    fmap f (TxtIn g) = TxtIn $ fmap f . g
    fmap f (TxtOut s c) = TxtOut s $ fmap f c

instance Applicative IO₄ where
    pure = TerminateWith
    (<*>) = ap

instance Monad IO₄ where
    TerminateWith x >>= f = f x
    TxtOut s c >>= f = TxtOut s $ c >>= f
    TxtIn g >>= f = TxtIn $ (>>= f) . g
Obviously this is not an efficient implementation of IO, but it's in principle usable.
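For instance, here's a sketch of a pure interpreter that runs an IO₄ program against a canned list of input lines:

runIO₄ :: [String] -> IO₄ a -> ([String], a)
runIO₄ _       (TerminateWith a) = ([], a)
runIO₄ ins     (TxtOut s c)      = let (outs, a) = runIO₄ ins c
                                   in (s : outs, a)
runIO₄ (i:ins) (TxtIn g)         = runIO₄ ins (g i)
runIO₄ []      (TxtIn g)         = runIO₄ [] (g "")  -- out of input: feed ""

For example, runIO₄ ["world"] (txtIn >>= \s -> txtOut ("hello " ++ s)) evaluates to (["hello world"], ()).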
Monads serve basically to compose functions together in a chain. Period.
Now the way they compose differs across the existing monads, thus resulting in different behaviors (e.g., to simulate mutable state in the state monad).
The confusion about monads is that being so general, i.e., a mechanism to compose functions, they can be used for many things, thus leading people to believe that monads are about state, about IO, etc, when they are only about "composing functions".
Now, one interesting thing about monads is that the result of the composition is always of type "M a", that is, a value inside an envelope tagged with "M". This feature happens to be really nice for implementing, for example, a clear separation of pure from impure code: declare all impure actions as functions of type "IO a" and provide no function, when defining the IO monad, to take the "a" value out of the "IO a". The result is that no function can be pure and at the same time take a value out of an "IO a", because there is no way to take such a value while staying pure (the function must be inside the "IO" monad to use such a value). (NOTE: well, nothing is perfect, so the "IO straitjacket" can be broken using unsafePerformIO :: IO a -> a, thus polluting what was supposed to be a pure function, but this should be used very sparingly, and only when you really know you are not introducing any impure code with side-effects.)
Monads are just a convenient framework for solving a class of recurring problems. First, monads must be functors (i.e. must support mapping without looking at the elements (or their type)); they must also bring a binding (or chaining) operation and a way to create a monadic value from an element type (return). Finally, bind and return must satisfy three equations (left and right identity, and associativity), also called the monad laws. (Alternatively one could define monads to have a flattening operation instead of binding.)
The list monad is commonly used to deal with non-determinism. The bind operation selects one element of the list (intuitively, all of them in parallel worlds), lets the programmer do some computation with them, and then combines the results from all worlds into a single list (by concatenating, or flattening, a nested list). Here is how one would define a permutation function in the monadic framework of Haskell:
perm [e] = [[e]]
perm l = do
    (leader, index) <- zip l [0 :: Int ..]
    let shortened = take index l ++ drop (index + 1) l
    trailer <- perm shortened
    return (leader : trailer)
Here is an example repl session:
*Main> perm "a"
["a"]
*Main> perm "ab"
["ab","ba"]
*Main> perm ""
[]
*Main> perm "abc"
["abc","acb","bac","bca","cab","cba"]
It should be noted that the list monad is in no way a side-effecting computation. A mathematical structure being a monad (i.e. conforming to the above-mentioned interfaces and laws) does not imply side effects, though side-effecting phenomena often fit nicely into the monadic framework.
You need monads if you have a type constructor and functions that return values of that type family. Eventually, you would like to combine these kinds of functions together. These are the three key elements to answering why.
Let me elaborate. You have Int, String and Real and functions of type Int -> String, String -> Real and so on. You can combine these functions easily, ending with Int -> Real. Life is good.
Then, one day, you need to create a new family of types. It could be because you need to consider the possibility of returning no value (Maybe), returning an error (Either), multiple results (List) and so on.
Notice that Maybe is a type constructor. It takes a type, like Int and returns a new type Maybe Int. First thing to remember, no type constructor, no monad.
Of course, you want to use your type constructor in your code, and soon you end with functions like Int -> Maybe String and String -> Maybe Float. Now, you can't easily combine your functions. Life is not good anymore.
And here's when monads come to the rescue. They allow you to combine that kind of function again. You just need to replace ordinary composition (.) with Kleisli composition (>=>).
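As a sketch, with made-up functions matching the types mentioned above (Control.Monad exports >=>):

import Control.Monad ((>=>))

showIfPositive :: Int -> Maybe String
showIfPositive n
    | n > 0     = Just (show n)
    | otherwise = Nothing

parseFloat :: String -> Maybe Float
parseFloat s = case reads s of
    [(x, "")] -> Just x
    _         -> Nothing

-- Composition works again: Int -> Maybe Float
combined :: Int -> Maybe Float
combined = showIfPositive >=> parseFloat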
Why do we need monadic types?
Since it was the quandary of I/O and its observable effects in nonstrict languages like Haskell that brought the monadic interface to such prominence:
[...] monads are used to address the more general problem of computations (involving state, input/output, backtracking, ...) returning values: they do not solve any input/output-problems directly but rather provide an elegant and flexible abstraction of many solutions to related problems. [...] For instance, no less than three different input/output-schemes are used to solve these basic problems in Imperative functional programming, the paper which originally proposed `a new model, based on monads, for performing input/output in a non-strict, purely functional language'. [...]
[Such] input/output-schemes merely provide frameworks in which side-effecting operations can safely be used with a guaranteed order of execution and without affecting the properties of the purely functional parts of the language.
Claus Reinke (pages 96-97 of 210).
(emphasis by me.)
[...] When we write effectful code – monads or no monads – we have to constantly keep in mind the context of expressions we pass around.
The fact that monadic code ‘desugars’ (is implementable in terms of) side-effect-free code is irrelevant. When we use monadic notation, we program within that notation – without considering what this notation desugars into. Thinking of the desugared code breaks the monadic abstraction. A side-effect-free, applicative code is normally compiled to (that is, desugars into) C or machine code. If the desugaring argument has any force, it may be applied just as well to the applicative code, leading to the conclusion that it all boils down to the machine code and hence all programming is imperative.
[...] From the personal experience, I have noticed that the mistakes I make when writing monadic code are exactly the mistakes I made when programming in C. Actually, monadic mistakes tend to be worse, because monadic notation (compared to that of a typical imperative language) is ungainly and obscuring.
Oleg Kiselyov (page 21 of 26).
The most difficult construct for students to understand is the monad. I introduce IO without mentioning monads.
Olaf Chitil.
More generally:
Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.
Considering that the concept of monads is not likely to disappear from the functional programming landscape any time soon, it is vital that we, as the functional programming community, somehow overcome the problems novices encounter when first studying monads.
Tim Steenvoorden, Jurriën Stutterheim, Erik Barendsen and Rinus Plasmeijer.
If only there was another way to specify "a guaranteed order of execution" in Haskell, while keeping the ability to separate regular Haskell definitions from those involved in I/O (and its observable effects) - translating this variation of Philip Wadler's echo:
val echoML : unit -> unit
fun echoML () = let val c = getcML () in
                  if c = #"\n" then
                    ()
                  else
                    let val _ = putcML c in
                      echoML ()
                    end
                end

fun putcML c = TextIO.output1(TextIO.stdOut, c);
fun getcML () = valOf(TextIO.input1(TextIO.stdIn));
...could then be as simple as:
echo :: OI -> ()
echo u = let !(u1:u2:u3:_) = partsOI u in
         let !c = getChar u1 in
         if c == '\n' then
           ()
         else
           let !_ = putChar c u2 in
           echo u3
where:
data OI -- abstract
foreign import ccall "primPartOI" partOI :: OI -> (OI, OI)
⋮
foreign import ccall "primGetCharOI" getChar :: OI -> Char
foreign import ccall "primPutCharOI" putChar :: Char -> OI -> ()
⋮
and:
partsOI :: OI -> [OI]
partsOI u = let !(u1, u2) = partOI u in u1 : partsOI u2
How would this work? At run-time, Main.main receives an initial OI pseudo-data value as an argument:
module Main(main) where
main :: OI -> ()
⋮
...from which other OI values are produced, using partOI or partsOI. All you have to do is ensure each new OI value is used at most once, in each call to an OI-based definition, foreign or otherwise. In return, you get back a plain ordinary result - it isn't e.g. paired with some odd abstract state, and it doesn't require the use of a callback continuation, etc.
Using OI, instead of the unit type () like Standard ML does, means we can avoid always having to use the monadic interface:
Once you're in the IO monad, you're stuck there forever, and are reduced to Algol-style imperative programming.
Robert Harper.
But if you really do need it:
type IO a = OI -> a

unitIO :: a -> IO a
unitIO x = \u -> let !_ = partOI u in x

bindIO :: IO a -> (a -> IO b) -> IO b
bindIO m k = \u -> let !(u1, u2) = partOI u in
                   let !x = m u1 in
                   let !y = k x u2 in
                   y
⋮
So, monadic types aren't always needed - there are other interfaces out there:
LML had a fully fledged implementation of oracles running of a multi-processor (a Sequent Symmetry) back in ca 1989. The description in the Fudgets thesis refers to this implementation. It was fairly pleasant to work with and quite practical.
[...]
These days everything is done with monads so other solutions are sometimes forgotten.
Lennart Augustsson (2006).
Wait a moment: since it so closely resembles Standard ML's direct use of effects, is this approach and its use of pseudo-data referentially transparent?
Absolutely - just find a suitable definition of "referential transparency"; there's plenty to choose from...

About value in context (applied in Monad)

I have a small question about value in context.
Take Just 'a', so the value in context of type Maybe in this case is 'a'
Take [3], so value in context of type [a] in this case is 3
And if you apply the monad for [3] like this: [3] >>= \x -> [x+3], it means you assign x with value 3. It's ok.
But now, take [3,2]: what is the value in the context of type [a]? And it's so strange that if you apply the monad to it like this:
[3,4] >>= \x -> x+3
It gives the correct answer [6,7], but actually we don't understand what x is in this case. You can answer: ah, x is 3 and then 4, and x feeds the function two times, and the results are concatenated, as Monad does: concat (map f xs), like this:
[3,4] >>= concat (map f x)
So in this case, [3,4] will be assigned to x. That must be wrong, because [3,4] is not a value. So Monad is wrong.
I think your problem is focusing too much on the values. A monad is a type constructor, and as such is not concerned with how many and what kinds of values there are, but only with the context.
A Maybe a can be an a, or nothing. Easy, and you correctly observed that.
An Either String a is either some a, or alternatively some information in form of a String (e.g. why the calculation of a failed).
Finally, [a] is an unknown number of as (or none at all), that may have resulted from an ambiguous computation, or one giving multiple results (like a quadratic equation).
Now, for the interpretation of (>>=), it is helpful to know that the essential property of a monad (how it is defined by category theorists) is
join :: m (m a) -> m a.
Together with fmap, (>>=) can be written in terms of join.
What join means is the following: A context, put in the same context again, still has the same resulting behavior (for this monad).
This is quite obvious for Maybe (Maybe a): Something can essentially be Just (Just x), or Nothing, or Just Nothing, which provides the same information as Nothing. So, instead of using Maybe (Maybe a), you could just have Maybe a and you wouldn't lose any information. That's what join does: it converts to the "easier" context.
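In code, that collapsing step for Maybe is just:

joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just (Just x)) = Just x   -- real information survives
joinMaybe _               = Nothing  -- Just Nothing and Nothing both collapse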
[[a]] is somehow more difficult, but not much. You essentially have multiple/ambiguous results out of multiple/ambiguous results. A good example are the roots of a fourth-degree polynomial, found by solving a quadratic equation. You first get two solutions, and out of each you can find two others, resulting in four roots.
But the point is, it doesn't matter if you speak of an ambiguous ambiguous result, or just an ambiguous result. You could just always use the context "ambiguous", and transform multiple levels with join.
And here comes what (>>=) does for lists: it applies ambiguous functions to ambiguous values:
squareRoots :: Complex -> [Complex]
fourthRoots num = squareRoots num >>= squareRoots
can be rewritten as
fourthRoots num = join $ squareRoots `fmap` (squareRoots num)
-- [1,-1,i,-i] <- [[1,-1],[i,-i]] <- [1,-1] <- 1
since all you have to do is to find all possible results for each possible value.
This is why join is concat for lists, and in fact
m >>= f == join (fmap f m)
must hold in any monad.
A similar interpretation can be given to IO. A computation with side-effects, which can also have side-effects (IO (IO a)), is in essence just something with side-effects.
You have to take the word "context" quite broadly.
A common way of interpreting a list of values is that it represents an indeterminate value, so [3,4] represents a value which is three or four, but we don't know which (perhaps we just know it's a solution of x^2 - 7x + 12 = 0).
If we then apply the question's function (\x -> x+3) to that, we know the result is 6 or 7, but we still don't know which.
Another example of an indeterminate value that you're more used to is 3. It could mean 3::Int or 3::Integer or even sometimes 3.0::Double. It feels easier because there's only one symbol representing the indeterminate value, whereas in a list, all the possibilities are listed (!).
If you write
asum = do
    x <- [10,20]
    y <- [1,2]
    return (x+y)
You'll get a list with four possible answers: [11,12,21,22]
That's one for each of the possible ways you could add x and y.
It is not the values that are in the context, it's the types.
Just 'a' :: Maybe Char --- Char is in a Maybe context.
[3, 2] :: [Int] --- Int is in a [] context.
Whether there is one, none or many of the a in the m a is beside the point.
Edit: Consider the type of (>>=) :: Monad m => m a -> (a -> m b) -> m b.
You give the example Just 3 >>= (\x->Just(4+x)). But consider Nothing >>= (\x->Just(4+x)). There is no value in the context. But the type is in the context all the same.
It doesn't make sense to think of x as necessarily being a single value. x has a single type. If we are dealing with the Identity monad, then x will be a single value, yes. If we are in the Maybe monad, x may be a single value, or it may never be a value at all. If we are in the list monad, x may be a single value, or not be a value at all, or be various different values... but what it is not is the list of all those different values.
Your other example --- [2, 3] >>= (\x -> x + 3) --- [2, 3] is not passed to the function. [2, 3] + 3 would have a type error. 2 is passed to the function. And so is 3. The function is invoked twice, gives results for both those inputs, and the results are combined by the >>= operator. [2, 3] is not passed to the function.
"context" is one of my favorite ways to think about monads. But you've got a slight misconception.
Take Just 'a', so the value in context of type Maybe in this case is 'a'
Not quite. You keep saying the value in context, but there is not always a value "inside" a context, or if there is, then it is not necessarily the only value. It all depends on which context we are talking about.
The Maybe context is the context of "nullability", or potential absence. There might be something there, or there might be Nothing. There is no value "inside" of Nothing. So the maybe context might have a value inside, or it might not. If I give you a Maybe Foo, then you cannot assume that there is a Foo. Rather, you must assume that it is a Foo inside the context where there might actually be Nothing instead. You might say that something of type Maybe Foo is a nullable Foo.
Take [3], so value in context of type [a] in this case is 3
Again, not quite right. A list represents a nondeterministic context. We're not quite sure what "the value" is supposed to be, or if there is one at all. In the case of a singleton list, such as [3], then yes, there is just one. But one way to think about the list [3,4] is as some unobservable value which we are not quite sure about, but we are certain that it is 3 or that it is 4. You might say that something of type [Foo] is a nondeterministic Foo.
[3,4] >>= \x -> x+3
This is a type error; not quite sure what you meant by this.
So in this case, [3,4] will be assigned to x. That must be wrong, because [3,4] is not a value. So Monad is wrong.
You totally lost me here. Each instance of Monad has its own implementation of >>= which defines the context that it represents. For lists, the definition is
(xs >>= f) = (concat (map f xs))
You may want to learn about Functor and Applicative operations, which are related to the idea of Monad, and might help clear some confusion.

Why does the Applicative instance for Maybe give Nothing when function is Nothing in <*>

I am a beginner with Haskell and am reading the Learn You a Haskell book. I have been trying to digest functors and applicative functors for a while now.
In the applicative functors topic, the instance implementation for Maybe is given as
instance Applicative Maybe where
    pure = Just
    Nothing <*> _ = Nothing
    (Just f) <*> something = fmap f something
So, as I understand it, we get Nothing if the left side functor (for <*>) is Nothing. To me, it seems to make more sense as
Nothing <*> something = something
So that this applicative functor has no effect. What is the use case, if any, for giving out Nothing?
Say, I have a Maybe String with me, whose value I don't know. I have to give this Maybe to a third-party function, but want its result to go through a few Maybe (a -> b)'s first. If some of these functions are Nothing, I'll want them to silently return their input, not give out a Nothing, which is a loss of data.
So, what is the thinking behind returning Nothing in the above instance?
How would that work? Here's the type signature:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
So the second argument here would be of type Maybe a, while the result needs to be of type Maybe b. You need some way to turn a into b, which you can only do if the first argument isn't Nothing.
The only way something like this would work is if you have one or more values of type Maybe (a -> a) and want to apply any that aren't Nothing. But that's much too specific for the general definition of (<*>).
Edit: Since it seems to be the Maybe (a -> a) scenario you actually care about, here are a couple of examples of what you can do with a bunch of values of that type:
Keep all the functions, discarding the Nothings, then apply them:
import Data.Maybe (catMaybes)

applyJust :: [Maybe (a -> a)] -> a -> a
applyJust = foldr (.) id . catMaybes
The catMaybes function gives you a list containing only the Just values, then the foldr composes them all together, starting from the identity function (which is what you'll get if there are no functions to apply).
Alternatively, you can take functions until finding a Nothing, then bail out:
applyWhileJust :: [Maybe (a -> a)] -> a -> a
applyWhileJust (Just f : fs) = f . applyWhileJust fs
applyWhileJust (Nothing : _) = id
applyWhileJust []            = id
This uses a similar idea to the above, except that when it finds a Nothing it ignores the rest of the list. If you like, you can also write it as applyWhileJust = foldr (maybe (const id) (.)) id but that's a little harder to read...
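For example (with the definitions above): applyJust [Just (+1), Nothing, Just (*2)] 5 gives 11 (both functions applied), while applyWhileJust [Just (+1), Nothing, Just (*2)] 5 stops at the Nothing and gives 6.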
Think of the <*> as the normal * operator. a * 0 == 0, right? It doesn't matter what a is. So using the same logic, Just (const a) <*> Nothing == Nothing. The Applicative laws dictate that a data type has to behave like this.
The reason why this is useful, is that Maybe is supposed to represent the presence of something, not the absence of something. If you pipeline a Maybe value through a chain of functions, if one function fails, it means that a failure happened, and that the process needs to be aborted.
The behavior you propose is impractical, because there are numerous problems with it:
If a failed function is to return its input, it has to have type a -> a, because the returned value and the input value have to have the same type for them to be interchangeable depending on the outcome of the function
According to your logic, what happens if you have Just (const 2) <*> Just 5? How can the behavior in this case be made consistent with the Nothing case?
See also the Applicative laws.
EDIT: fixed code typos, and again
Well what about this?
Just id <*> Just something
The usecase for Nothing comes when you start using <*> to plumb through functions with multiple inputs.
(-) <$> readInt "foo" <*> readInt "3"
Assuming you have a function readInt :: String -> Maybe Int, this will turn into:
(-) <$> Nothing <*> Just 3
<$> is just fmap, and fmap f Nothing is Nothing, so it reduces to:
Nothing <*> Just 3
Can you see now why this should produce Nothing? The original meaning of the expression was to subtract two numbers, but since we failed to produce a partially-applied function after the first input, we need to propagate that failure instead of just making up a nice function that has nothing to do with subtraction.
In addition to C. A. McCann's excellent answer, I'd like to point out that this might be a case of a "theorem for free", see http://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf . The gist of this paper is that for some polymorphic functions there is only one possible implementation for a given type signature, e.g. fst :: (a,b) -> a has no choice other than returning the first element of the pair (or being undefined), and this can be proven. This property may seem counter-intuitive, but it is rooted in the very limited information a function has about its polymorphic arguments (in particular, it can't create one out of thin air).
