Non-lawful Monoid instances for building up an AST: not considered harmful? - haskell

I've seen a data type defined like the following with a corresponding Monoid instance:
{-# LANGUAGE GADTs, InstanceSigs #-}

data Foo where
  FooEmpty :: String -> Foo
  FooAppend :: Foo -> Foo -> Foo

-- | Create a 'Foo' with a specific 'String'.
foo :: String -> Foo
foo = FooEmpty

-- (On GHC >= 8.4, Monoid is a subclass of Semigroup, so this also
-- needs an accompanying "instance Semigroup Foo".)
instance Monoid Foo where
  mempty :: Foo
  mempty = FooEmpty ""

  mappend :: Foo -> Foo -> Foo
  mappend = FooAppend
You can find the full code in a gist on GitHub.
This is how Foo can be used:
exampleFoo :: Foo
exampleFoo =
  (foo "hello" <> foo " reallylongstringthatislong") <>
  (foo " world" <> mempty)
exampleFoo ends up as a tree that looks like this:
FooAppend
  (FooAppend
    (FooEmpty "hello")
    (FooEmpty " reallylongstringthatislong"))
  (FooAppend
    (FooEmpty " world")
    (FooEmpty ""))
Foo can be used to turn sequences of Monoid operations (mempty and mappend) into an AST. This AST can then be interpreted into some other Monoid.
For instance, here is a translation of Foo into a String that makes sure the string appends will happen optimally:
fooInterp :: Foo -> String
fooInterp = go ""
  where
    go :: String -> Foo -> String
    go accum (FooEmpty str) = str ++ accum
    go accum (FooAppend foo1 foo2) = go (go accum foo2) foo1
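As a quick check (a sketch assuming the definitions above; this main is not part of the gist), interpreting exampleFoo yields the expected concatenation, with every (++) right-associated regardless of how the tree was built:

main :: IO ()
main = putStrLn (fooInterp exampleFoo)
-- prints: hello reallylongstringthatislong world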
This is really nice. It is convenient that we can be sure String appends will happen in the right order. We don't have to worry about left-associated mappends.
However, the one thing that worries me is that the Monoid instance for Foo is not a legal Monoid instance.
For instance, take the first Monoid law:
mappend mempty x = x
If we let x be FooEmpty "hello", we get the following:
mappend mempty (FooEmpty "hello") = FooEmpty "hello"
mappend (FooEmpty "") (FooEmpty "hello") = FooEmpty "hello" -- replace mempty with its def
FooAppend (FooEmpty "") (FooEmpty "hello") = FooEmpty "hello" -- replace mappend with its def
You can see that FooAppend (FooEmpty "") (FooEmpty "hello") does not equal FooEmpty "hello". The other Monoid laws also don't hold for similar reasons.
Haskellers are usually against non-lawful instances. But I feel like this is a special case. We are just trying to build up a structure that can be interpreted into another Monoid. In the case of Foo, we can make sure that the Monoid laws hold for String in the fooInterp function.
Is it ever okay to use these types of non-lawful instances to build up an AST?
Are there any specific problems that need to be watched for when using these types of non-lawful instances?
Is there an alternative way to write code that uses something like Foo? Some way to enable interpretation of a monoidal structure instead of using mappend on a type directly?

Quoting this answer on a similar question:
You can think of it from this alternative point of view: the law (a <> b) <> c = a <> (b <> c) doesn't specify which equality should be used, i.e. what specific relation the = denotes. It is natural to think of it in terms of structural equality, but note that very few typeclass laws actually hold up to structural equality (e.g. try proving fmap id = id for [] as opposed to forall x . fmap id x = id x).
For example, it's mostly fine if you do not export the constructors of Foo, and only export functions that, from the point of view of users, behave as if Foo were a monoid. But most of the time it is possible to come up with a representation that's structurally a monoid, good enough in practice, though maybe not as general (below, you cannot reassociate arbitrarily after the fact, because interpretation is mixed with construction).
type Foo = Endo String

foo :: String -> Foo
foo s = Endo (s <>)

unFoo :: Foo -> String
unFoo (Endo f) = f ""
(Endo is from Data.Monoid.)
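To see the Endo version in action (a quick sketch assuming the definitions above): Endo's mempty is Endo id and its (<>) is function composition, so every append ends up right-associated when unFoo finally applies the composed function to "".

exampleEndo :: String
exampleEndo = unFoo ((foo "hello" <> foo " reallylongstringthatislong") <> (foo " world" <> mempty))
-- == "hello reallylongstringthatislong world"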
Here is another SO question where a non-structural Alternative instance is considered at first.

This will come up for most non-trivial data structures. The only exceptions I can think of off the top of my head are (some) trie-like structures.
Balanced tree data structures allow multiple balancings of most values. This is true of AVL trees, red-black trees, B-trees, 2-3 finger trees, etc.
Data structures designed around "rebuilding", such as Hood-Melville queues, allow variable amounts of duplication within structures representing most values.
Data structures implementing efficient priority queues allow multiple arrangements of elements.
Hash tables will arrange elements differently depending on when collisions occur.
None of these structures can be asymptotically as efficient without this flexibility. The flexibility, however, always breaks laws under the strictest interpretation. In Haskell, the only good way to deal with this is by using the module system to make sure no one can detect the problem. In experimental dependently typed languages, researchers have been working on things like observational type theory and homotopy type theory to find better ways to talk about "equality", but that research is pretty far from becoming practical.

Is it ever okay to use these types of non-lawful instances to build up an AST?
This is a matter of opinion. (I'm firmly in the 'never ok' camp.)
Are there any specific problems that need to be watched for when using these types of non-lawful instances?
cognitive burden placed on potential users and future maintainers
potential bugs because we use the type in a place that makes assumptions based on the broken law(s)
edit to answer questions in comments:
Would you be able to come up with specific examples of how it raises the cognitive burden on users?
Imagine how annoyed you would be if someone did this in C:
// limit all while loops to 10 iterations
#define while(exp) for(int i = 0; (exp) && i < 10; ++i)
Now we have to keep track of the scope of this pseudo-while definition and its implications. It's a non-Haskell example, but I think the principle is the same. We shouldn't expect the semantics of while to be different in a particular source file just like we shouldn't expect the semantics of Monoid to be different for a particular data type.
When we say something is an X, then it should be an X, because people understand the semantics of X. The principle here is: don't create exceptions to well understood concepts.
I think the point of using lawful abstractions (like monoid) in the first place is to alleviate the need for programmers to learn and remember a myriad of different semantics. Thus, every exception we create undermines this goal. In fact, it makes it worse; we have to remember the abstraction and on top of that remember all the exceptions. (As an aside, I admire but pity those who learned English as a second language.)
Or how it can lead to potential bugs?
some library:
-- instances of this class must have property P
class AbidesByP a where
  ...

-- foo relies on the property P
foo :: AbidesByP a => a -> Result
foo a = ...
my code:
data MyData = ...

-- note: AbidesByP's are supposed to have property P, but this one doesn't
instance AbidesByP MyData where
  ...
some other programmer (or me in a few months):
doSomethingWithMyData :: MyData -> SomeResult
doSomethingWithMyData x = let ...
                              ...
                              ...
                              r = foo x -- potential bug
                              ...
                              ...
                          in ...
Is there an alternative way to write code that uses something like Foo?
I'd probably just use the constructor to construct:
(foo "hello" `FooAppend` foo " reallylongstringthatislong") `FooAppend` (foo " world" `FooAppend` foo "")
or make an operator:
(<++>) = FooAppend
(foo "hello" <++> foo " reallylongstringthatislong") <++> (foo " world" <++> foo "")

Related

What is the special status of the 2-tuple?

Reading Real World Haskell and the Typeclassopedia I get the impression that 2-tuples (a,b) can have very special roles in Haskell.
The first use that I came across is with lookup where we use a list of 2-tuples as a dictionary.
Then I also came across the fact that ((,) e) is an instance of Functor (but no other n-tuple is), which makes sense, for example, for (key,value) pairs like the above.
Now the most recent case - which is the one I actually want to ask about - is in chapter 4.3 of the Typeclassopedia. There it says that ((,) a) is an instance of Applicative if a is a monoid. When do you actually make use of that? What are applications where you use the Applicative instance of (a,b)?
There's nothing preventing us from writing an instance for triples or arbitrary n-tuples:
instance Functor ((,,) a b) where
  fmap f (x,y,z) = (x,y,f z)

instance (Monoid a, Monoid b) => Applicative ((,,) a b) where
  pure z = (mempty, mempty, z)
  (a,b,f) <*> (x,y,z) = (a `mappend` x, b `mappend` y, f z)
In that sense, pairs aren't special: you can implement the instance for any n-tuple. But since those instances have to be written somewhere, it raises the question of how far we should go. For example, Monoid instances are defined up to 5-tuples. It's certainly possible to write them for 10-tuples, but we're just copying boilerplate code at that point.
That being said, pairs are special since their collection provide a natural way to describe a relation. One example is a dictionary, which relates a term and its definition:
dictionary :: [(String, String)]
dictionary =
  [("cat", "animal that likes strings; not Strings, though")
  ,("dog", "animal that likes you; yes you")
  ,("foo", "a strange word used by programmers in examples")
  ]
The other way to show that relation (if the first parts of all pairs were unique) would be
partialDictionaryEntry :: String -> String
if we limit the input domain to "cat", "dog" and "foo", or
dictionaryEntry :: String -> Maybe String
which is exactly what \s -> lookup s dictionary would be. And with pairs you can model any other n-tuple:
(a,b,z) = ((a,b),z)
(a,b,c,z) = ((a,b,c),z) = (((a,b),c),z)
In that sense they are the smallest container that provides this functionality. We can build all other tuples types from pairs. Heck, theoretically we don't even need to write our Applicative ((,,) a b) instance, since it's already provided for (,) (a,b) due to the Monoid instance.
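To illustrate that last point (my sketch, not from the original answer): nesting a triple as ((a, b), z) lets the existing pair instance do the work, because (a, b) is itself a Monoid whenever both components are.

nestedTriple :: ((String, String), Int)
nestedTriple = (+) <$> (("a", "x"), 2) <*> (("b", "y"), 3)
-- == (("ab", "xy"), 5)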
That being said, why does it even have an Applicative instance? It's the simplest Writer implementation:
import Prelude hiding (log)  -- our log would otherwise clash with Prelude's log

log :: (Show a) => a -> (String, a)
log x = (show x ++ "\n", x)

fivePlusThree = (+) <$> log 3 <*> log 5

main = do
  let (logs, result) = fivePlusThree
  putStrLn logs
  print result
3
5
8
This provides an easy way to add additional information to functions or values, although you will probably use Writer and its writer method, since they are more pleasant to use and provide a strict variant.
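For comparison, here is (roughly) the same example with the real Writer from mtl; logW is a made-up name, and writer wraps a (result, log) pair just like our tuple did:

import Control.Monad.Writer

logW :: Show a => a -> Writer String a
logW x = writer (x, show x ++ "\n")

fivePlusThree' :: Writer String Int
fivePlusThree' = (+) <$> logW 3 <*> logW 5
-- runWriter fivePlusThree' == (8, "3\n5\n")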

Parsing to Free Monads

Say I have the following free monad:
data ExampleF a
= Foo Int a
| Bar String (Int -> a)
deriving Functor
type Example = Free ExampleF -- this is the free monad want to discuss
I know how I can work with this monad, eg. I could write some nice helpers:
foo :: Int -> Example ()
foo i = liftF $ Foo i ()
bar :: String -> Example Int
bar s = liftF $ Bar s id
So I can write programs in Haskell like:
fooThenBar :: Example Int
fooThenBar = do
  foo 10
  bar "nice"
I know how to print it, interpret it, etc. But what about parsing it? Would it be possible to write a parser that could parse arbitrary programs like:
foo 12
bar nice
foo 11
foo 42
So I can store them, serialize them, use them in cli programs etc.
The problem I keep running into is that the type of the program depends on which program is being parsed. If the program ends with a foo it's of type Example (); if it ends with a bar it's of type Example Int.
I do not feel like writing parsers for every possible permutation (it's simple here because there are only two possibilities, but imagine we add Baz Int (String -> a), Doo (Int -> a), Moz Int a, Foz String a, ...). This gets tedious and error-prone.
Perhaps I'm solving the wrong problem?
Boilerplate
To run the above examples, you need to add this to the beginning of the file:
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free
import Text.ParserCombinators.Parsec
Note: I put up a gist containing this code.
Not every Example value can be represented on the page without reimplementing some portion of Haskell. For example, return putStrLn has a type of Example (String -> IO ()), but I don't think it makes sense to attempt to parse that sort of Example value out of a file.
So let's restrict ourselves to parsing the examples you've given, which consist only of calls to foo and bar sequenced with >> (that is, no variable bindings and no arbitrary computations)*. The Backus-Naur form for our grammar looks approximately like this:
<program> ::= "" | <expr> "\n" <program>
<expr> ::= "foo " <integer> | "bar " <string>
It's straightforward enough to parse our two types of expression...
type Parser = Parsec String ()

int :: Parser Int
int = fmap read (many1 digit)

parseFoo :: Parser (Example ())
parseFoo = string "foo " *> fmap foo int

parseBar :: Parser (Example Int)
parseBar = string "bar " *> fmap bar (many1 alphaNum)
... but how can we give a type to the composition of these two parsers?
parseExpr :: Parser (Example ???)
parseExpr = parseFoo <|> parseBar
parseFoo and parseBar have different types, so we can't compose them with <|> :: Alternative f => f a -> f a -> f a. Moreover, there's no way to know ahead of time which type the program we're given will be: as you point out, the type of the parsed program depends on the value of the input string. "Types depending on values" is called dependent types; Haskell doesn't feature a proper dependent type system, but it comes close enough for us to have a stab at making this example work.
Let's start by forcing the expressions on either side of <|> to have the same type. This involves erasing Example's type parameter using existential quantification.†
data Ex a = forall i. Wrap (a i)  -- needs {-# LANGUAGE ExistentialQuantification #-}

parseExpr :: Parser (Ex Example)
parseExpr = fmap Wrap parseFoo <|> fmap Wrap parseBar
This typechecks, but the parser now returns an Example containing a value of an unknown type. A value of unknown type is of course useless - but we do know something about Example's parameter: it must be either () or Int because those are the return types of parseFoo and parseBar. Programming is about getting knowledge out of your brain and onto the page, so we're going to wrap up the Example value with a bit of GADT evidence which, when unwrapped, will tell you whether a was Int or ().
-- these need the GADTs, TypeOperators and PatternSynonyms extensions
data Ty a where
  IntTy :: Ty Int
  UnitTy :: Ty ()

data (a :*: b) i = a i :&: b i

type Sig a b = Ex (a :*: b)
pattern Sig x y = Wrap (x :&: y)

parseExpr :: Parser (Sig Ty Example)
parseExpr = fmap (\x -> Sig UnitTy x) parseFoo <|>
            fmap (\x -> Sig IntTy x) parseBar
Ty is (something like) a runtime "singleton" representative of Example's type parameter. When you pattern match on IntTy, you learn that a ~ Int; when you pattern match on UnitTy you learn that a ~ (). (Information can be made to flow the other way, from types to values, using classes.) :*:, the functor product, pairs up two type constructors ensuring that their parameters are equal; thus, pattern matching on the Ty tells you about its accompanying Example.
Sig is therefore called a dependent pair or sigma type - the type of the second component of the pair depends on the value of the first. This is a common technique: when you erase a type parameter by existential quantification, it usually pays to make it recoverable by bundling up a runtime representative of that parameter.
Note that this use of Sig is equivalent to Either (Example Int) (Example ()) - a sigma type is a sum, after all - but this version scales better when you're summing over a large (or possibly infinite) set.
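As a small illustration of the consumer side (my sketch, not part of the original answer): pattern matching on the Ty evidence brings the type equality into scope, so GHC lets us use the program at its concrete type inside each branch.

describe :: Sig Ty Example -> String
describe (Sig IntTy _) = "a program of type Example Int"
describe (Sig UnitTy _) = "a program of type Example ()"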
Now it's easy to build our expression parser into a program parser. We just have to repeatedly apply the expression parser, and then manipulate the dependent pairs in the list.
parseProgram :: Parser (Sig Ty Example)
parseProgram = fmap (foldr1 combine) $ parseExpr `sepBy1` (char '\n')
  where combine (Sig _ val) (Sig ty acc) = Sig ty (val >> acc)
The code I've shown you is not exemplary. It doesn't separate the concerns of parsing and typechecking. In production code I would modularise this design by first parsing the data into an untyped syntax tree - a separate data type which doesn't enforce the typing invariant - then transform that into a typed version by type-checking it. The dependent pair technique would still be necessary to give a type to the output of the type-checker, but it wouldn't be tangled up in the parser.
*If binding is not a requirement, have you thought about using a free applicative to represent your data?
†Ex and :*: are reusable bits of machinery which I lifted from the Hasochism paper
So, I worry that this is the same sort of premature abstraction that you see in object-oriented languages, getting in the way of things. For example, I am not 100% sure that you are using the structure of the free monad: your helpers seem to use id and () in a rather boring way, and I'm not sure whether your Int -> x is ever anything other than either Pure :: Int -> Free ExampleF Int or const (something :: Free ExampleF Int).
The free monad for a functor F can basically be described as a tree whose data is stored in leaves and whose branching factor is controlled by the recursion in each constructor of the functor F. So for example Free Identity has no branching, hence only one leaf, and thus has the same structure as the monad:
data MonoidalFree m x = MF m x deriving (Functor)

instance Monoid m => Monad (MonoidalFree m) where
  return x = MF mempty x
  MF m x >>= my_x = case my_x x of MF n y -> MF (mappend m n) y
In fact Free Identity is isomorphic to MonoidalFree (Sum Integer), the difference is just that instead of MF (Sum 3) "Hello" you see Free . Identity . Free . Identity . Free . Identity $ Pure "Hello" as the means of tracking this integer. On the other hand if you have data E x = L x | R x deriving (Functor) then you get a sort of "path" of Ls and Rs before you hit this one leaf, Free E is going to be isomorphic to MonoidalFree [Bool].
The reason I'm going through this is that when you combine Free with an Integer -> x functor, you get an infinitely branching tree, and when I'm looking through your code to figure out how you're actually using this tree, all I see is that you use the id function with it. As far as I can tell, that restricts the recursion to either have the form Free (Bar "string" Pure) or else Free (Bar "string" (const subExpression)), in which case the system would seem to reduce completely to the MonoidalFree [Either Int String] monad.
(At this point I should pause to ask: Is that correct as far as you know? Was this what was intended?)
Anyway. Aside from my problems with your premature abstraction, the specific problem that you're citing with your monad (you can't tell the difference between () and Int) has a bunch of really complicated solutions, but one really easy one. The really easy solution is to yield a value of type Example (Either () Int): if you have a () you can fmap Left onto it, and if you have an Int you can fmap Right onto it.
Without a much better understanding of how you're using this thing over TCP/IP we can't recommend a better structure for you than the generic free monads that you seem to be finding -- in particular we'd need to know how you're planning on using the infinite-branching of Int -> x options in practice.

Why can't I use the type `Show a => [Something -> a]`?

I have a record type say
data Rec = Rec
  { recNumber :: Int
  , recName :: String
  -- more fields of various types
  }
And I want to write a toString function for Rec :
recToString :: Rec -> String
recToString r = intercalate "\t" $ map ($ r) fields
  where fields = [show . recNumber, show . recName]
This works. fields has type [Rec -> String]. But I'm lazy and I would prefer writing
recToString r = intercalate "\t" $ map (\f -> show $ f r) fields
  where fields = [recNumber, recName]
But this doesn't work. Intuitively I would say fields has type Show a => [Rec -> a] and this should be ok. But Haskell doesn't allow it.
I'd like to understand what is going on here. Would I be right if I said that in the first case I get a list of functions such that the two occurrences of show are actually not the same function, but Haskell is able to determine which is which at compile time (which is why it's ok)?
[show . recNumber, show . recName]
 ^-- show from the Show Int instance
                   ^-- show from the Show String instance
Whereas in the second case, I only have one literal use of show in the code, and that would have to refer to multiple instances, not determined at compile time?
map (\f -> show $ f r) fields
           ^-- must be both instances at the same time
Can someone help me understand this? And also, are there workarounds or type system extensions that allow this?
The type signature doesn't say what you think it says.
This seems to be a common misunderstanding. Consider the function
foo :: Show a => Rec -> a
People frequently seem to think this means that "foo can return any type that it wants to, so long as that type supports Show". It doesn't.
What it actually means is that foo must be able to return any possible type, because the caller gets to choose what the return type should be.
A few moments' thought will reveal that foo actually cannot exist. There is no way to turn a Rec into any possible type that can ever exist. It can't be done.
People often try to do something like Show a => [a] to mean "a list of mixed types but they all have Show". That obviously doesn't work; this type actually means that the list elements can be any type, but they still have to be all the same.
What you're trying to do seems reasonable enough. Unfortunately, I think your first example is about as close as you can get. You could try using tuples and lenses to get around this. You could try using Template Haskell instead. But unless you've got a hell of a lot of fields, it's probably not even worth the effort.
The type you actually want is not:
Show a => [Rec -> a]
Any type declaration with unbound type variables has an implicit forall. The above is equivalent to:
forall a. Show a => [Rec -> a]
This isn't what you want, because the a must be specialized to a single type for the entire list. (By the caller, to any one type they choose, as MathematicalOrchid points out.) Because you want the a of each element in the list to be able to be instantiated differently... what you are actually seeking is an existential type.
[exists a. Show a => Rec -> a]
You are wishing for a form of subtyping that Haskell does not support very well. The above syntax is not supported at all by GHC. You can use a wrapper data type to sort of accomplish this:

{-# LANGUAGE ExistentialQuantification #-}

-- must be data, not newtype: GHC does not allow
-- existentially quantified newtypes
data Showy = forall a. Show a => Showy a

fields :: [Rec -> Showy]
fields = [Showy . recNumber, Showy . recName]
But unfortunately, that is just as tedious as converting directly to strings, isn't it?
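For completeness, consuming the wrapped fields would look like this (a sketch; showy is a name I made up):

import Data.List (intercalate)

showy :: Showy -> String
showy (Showy a) = show a

recToString :: Rec -> String
recToString r = intercalate "\t" (map (showy . ($ r)) fields)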
I don't believe that lens is capable of getting around this particular weakness of the Haskell type system:
recToString :: Rec -> String
recToString r = intercalate "\t" $ toListOf (each . to fieldShown) fields
  where fields = (recNumber, recName)
        fieldShown f = show (f r)
        -- error: Couldn't match type Int with [Char]
Suppose the fields do have the same type:
fields = [recNumber, recNumber]
Then it works, and Haskell figures out which show function instance to use at compile time; it doesn't have to look it up dynamically.
If you manually write out show each time, as in your original example, then Haskell can determine the correct instance for each call to show at compile time.
As for existentials... it depends on implementation, but presumably, the compiler cannot determine which instance to use statically, so a dynamic lookup will be used instead.
I'd like to suggest something very simple instead:
recToString r = intercalate "\t" [s recNumber, s recName]
  where s f = show (f r)
All the elements of a list in Haskell must have the same type, so a list containing one Int and one String simply cannot exist. It is possible to get around this in GHC using existential types, but you probably shouldn't (this use of existentials is widely considered an anti-pattern, and it doesn't tend to perform terribly well). Another option would be to switch from a list to a tuple, and use some weird stuff from the lens package to map over both parts. It might even work.

Why do we need monads?

In my humble opinion the answers to the famous question "What is a monad?", especially the most voted ones, try to explain what a monad is without clearly explaining why monads are really necessary. Can they be explained as the solution to a problem?
Why do we need monads?
We want to program only using functions. ("functional programming (FP)" after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). This way, "the program" is a function as well: main = f(g(x,y)). OK, but ...
More problems: some functions might fail (i.e. g(2,0), divide by 0). We have no "exceptions" in FP (an exception is not a function). How do we solve it?
Solution: Let's allow functions to return two kind of things: instead of having g : Real,Real -> Real (function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (function from two reals into (real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses maybe a real or be simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type). Also note that >>= must be written only once per "boxing type" (different box, different adapting algorithm).
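Written out in Haskell for the Maybe box, that adapting algorithm is just two lines (a minimal sketch of the standard instance, under a fresh name to avoid clashing with the Prelude):

bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- g failed: don't call f
bindMaybe (Just x) f = f x       -- extract the boxed value and feed f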
Many other problems arise which can be solved using this same pattern:

1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values".
2. Have a composer/linker g >>= f to help connect g's output to f's input, so we don't have to change any f at all.
Remarkable problems that can be solved using this technique are:
Having a global state that every function in the sequence of functions ("the program") can share: solution, the State monad.
We don't like "impure functions": functions that yield different output for the same input. Therefore, let's mark those functions, making them return a tagged/boxed value: the IO monad.
Total happiness!
The answer is, of course, "We don't". As with all abstractions, it isn't necessary.
Haskell does not need a monad abstraction. It isn't necessary for performing IO in a pure language. The IO type takes care of that just fine by itself. The existing monadic desugaring of do blocks could be replaced with desugaring to bindIO, returnIO, and failIO as defined in the GHC.Base module. (It's not a documented module on hackage, so I'll have to point at its source for documentation.) So no, there's no need for the monad abstraction.
So if it's not needed, why does it exist? Because it was found that many patterns of computation form monadic structures. Abstraction of a structure allows for writing code that works across all instances of that structure. To put it more concisely - code reuse.
In functional languages, the most powerful tool found for code reuse has been composition of functions. The good old (.) :: (b -> c) -> (a -> b) -> (a -> c) operator is exceedingly powerful. It makes it easy to write tiny functions and glue them together with minimal syntactic or semantic overhead.
But there are cases when the types don't work out quite right. What do you do when you have foo :: (b -> Maybe c) and bar :: (a -> Maybe b)? foo . bar doesn't typecheck, because b and Maybe b aren't the same type.
But... it's almost right. You just want a bit of leeway. You want to be able to treat Maybe b as if it were basically b. It's a poor idea to just flat-out treat them as the same type, though. That's more or less the same thing as null pointers, which Tony Hoare famously called the billion-dollar mistake. So if you can't treat them as the same type, maybe you can find a way to extend the composition mechanism (.) provides.
In that case, it's important to really examine the theory underlying (.). Fortunately, someone has already done this for us. It turns out that the combination of (.) and id form a mathematical construct known as a category. But there are other ways to form categories. A Kleisli category, for instance, allows the objects being composed to be augmented a bit. A Kleisli category for Maybe would consist of (.) :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c) and id :: a -> Maybe a. That is, the objects in the category augment the (->) with a Maybe, so (a -> b) becomes (a -> Maybe b).
And suddenly, we've extended the power of composition to things that the traditional (.) operation doesn't work on. This is a source of new abstraction power. Kleisli categories work with more types than just Maybe. They work with every type that can assemble a proper category, obeying the category laws.
Left identity: id . f = f
Right identity: f . id = f
Associativity: f . (g . h) = (f . g) . h
As long as you can prove that your type obeys those three laws, you can turn it into a Kleisli category. And what's the big deal about that? Well, it turns out that monads are exactly the same thing as Kleisli categories. Monad's return is the same as Kleisli id. Monad's (>>=) isn't identical to Kleisli (.), but it turns out to be very easy to write each in terms of the other. And the category laws are the same as the monad laws, when you translate them across the difference between (>>=) and (.).
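To make that translation concrete (a sketch, not from the original answer), here are Kleisli composition and identity for Maybe, with composition defined via (>>=); the standard library spells the general versions (<=<) and return in Control.Monad:

composeK :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c)
composeK f g = \a -> g a >>= f

idK :: a -> Maybe a
idK = Just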
So why go through all this bother? Why have a Monad abstraction in the language? As I alluded to above, it enables code reuse. It even enables code reuse along two different dimensions.
The first dimension of code reuse comes directly from the presence of the abstraction. You can write code that works across all instances of the abstraction. There's the entire monad-loops package consisting of loops that work with any instance of Monad.
The second dimension is indirect, but it follows from the existence of composition. When composition is easy, it's natural to write code in small, reusable chunks. This is the same way having the (.) operator for functions encourages writing small, reusable functions.
So why does the abstraction exist? Because it's proven to be a tool that enables more composition in code, resulting in creating reusable code and encouraging the creation of more reusable code. Code reuse is one of the holy grails of programming. The monad abstraction exists because it moves us a little bit towards that holy grail.
Benjamin Pierce said in TAPL
A type system can be regarded as calculating a kind of static approximation to the run-time behaviours of the terms in a program.

That's why a language equipped with a powerful type system is strictly more expressive than a poorly typed language. You can think about monads in the same way.
As @Carl and sigfpe point out, you can equip a datatype with all the operations you want without resorting to monads, typeclasses or whatever other abstract stuff. However monads allow you not only to write reusable code, but also to abstract away all redundant details.
As an example, let's say we want to filter a list. The simplest way is to use the filter function: filter (> 3) [1..10], which equals [4,5,6,7,8,9,10].
A slightly more complicated version of filter, that also passes an accumulator from left to right, is
swap (x, y) = (y, x)

(.*) = (.) . (.)

-- mapAccumL comes from Data.List
filterAccum :: (a -> b -> (Bool, a)) -> a -> [b] -> [b]
filterAccum f a xs = [x | (x, True) <- zip xs $ snd $ mapAccumL (swap .* f) a xs]
To get all i, such that i <= 10, sum [1..i] > 4, sum [1..i] < 25, we can write
filterAccum (\a x -> let a' = a + x in (a' > 4 && a' < 25, a')) 0 [1..10]
which equals [3,4,5,6].
Or we can redefine the nub function, that removes duplicate elements from a list, in terms of filterAccum:
nub' = filterAccum (\a x -> (x `notElem` a, x:a)) []
nub' [1,2,4,5,4,3,1,8,9,4] equals [1,2,4,5,3,8,9]. A list is passed as an accumulator here. The code works, because it's possible to leave the list monad, so the whole computation stays pure (notElem doesn't use >>= actually, but it could). However, it's not possible to safely leave the IO monad (i.e. you cannot execute an IO action and return a pure value; the value will always be wrapped in the IO monad). Another example is mutable arrays: after you have left the ST monad, where a mutable array lives, you cannot update the array in constant time anymore. So we need a monadic filtering from the Control.Monad module:
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
filterM _ [] = return []
filterM p (x:xs) = do
  flg <- p x
  ys <- filterM p xs
  return (if flg then x:ys else ys)
filterM executes a monadic action for all elements from a list, yielding elements, for which the monadic action returns True.
A filtering example with an array:
-- (needs Control.Monad.ST and Data.Array.ST)
nub' xs = runST $ do
  arr <- newArray (1, 9) True :: ST s (STUArray s Int Bool)
  let p i = readArray arr i <* writeArray arr i False
  filterM p xs

main = print $ nub' [1,2,4,5,4,3,1,8,9,4]
prints [1,2,4,5,3,8,9] as expected.
And a version with the IO monad, which asks what elements to return:
main = filterM p [1,2,4,5] >>= print
  where p i = putStrLn ("return " ++ show i ++ "?") *> readLn
E.g.
return 1? -- output
True -- input
return 2?
False
return 4?
False
return 5?
True
[1,5] -- output
And as a final illustration, filterAccum can be defined in terms of filterM:
filterAccum f a xs = evalState (filterM (state . flip f) xs) a
with the State monad, which is used under the hood, being just an ordinary datatype.
This example illustrates that monads not only allow you to abstract computational context and write clean reusable code (due to the composability of monads, as @Carl explains), but also to treat user-defined datatypes and built-in primitives uniformly.
I don't think IO should be seen as a particularly outstanding monad, but it's certainly one of the more astounding ones for beginners, so I'll use it for my explanation.
Naïvely building an IO system for Haskell
The simplest conceivable IO system for a purely-functional language (and in fact the one Haskell started out with) is this:
main₀ :: String -> String
main₀ _ = "Hello World"
With laziness, that simple signature is enough to actually build interactive terminal programs – very limited, though. Most frustrating is that we can only output text. What if we added some more exciting output possibilities?
data Output = TxtOutput String
            | Beep Frequency

main₁ :: String -> [Output]
main₁ _ = [ TxtOutput "Hello World"
          -- , Beep 440 -- for debugging
          ]
cute, but of course a much more realistic “alternative output” would be writing to a file. But then you'd also want some way to read from files. Any chance?
Well, when we take our main₁ program and simply pipe a file to the process (using operating system facilities), we have essentially implemented file-reading. If we could trigger that file-reading from within the Haskell language...
readFile :: FilePath -> (String -> [Output]) -> [Output]
This would use an “interactive program” String->[Output], feed it a string obtained from a file, and yield a non-interactive program that simply executes the given one.
There's one problem here: we don't really have a notion of when the file is read. The [Output] list sure gives a nice order to the outputs, but we don't get an order for when the inputs will be done.
Solution: make input-events also items in the list of things to do.
data IO₀ = TxtOut String
         | TxtIn (String -> [Output])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [Output])
         | Beep Double

main₂ :: String -> [IO₀]
main₂ _ = [ FileRead "/dev/null" $ \_ ->
              [TxtOutput "Hello World"]
          ]
Ok, now you may spot an imbalance: you can read a file and make output dependent on it, but you can't use the file contents to decide to e.g. also read another file. Obvious solution: make the result of the input-events also something of type IO, not just Output. That sure includes simple text output, but also allows reading additional files etc..
data IO₁ = TxtOut String
         | TxtIn (String -> [IO₁])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [IO₁])
         | Beep Double

main₃ :: String -> [IO₁]
main₃ _ = [ TxtIn $ \_ ->
              [TxtOut "Hello World"]
          ]
That would now actually allow you to express any file operation you might want in a program (though perhaps not with good performance), but it's somewhat overcomplicated:
main₃ yields a whole list of actions. Why don't we simply use the signature :: IO₁, which has this as a special case?
The lists don't really give a reliable overview of program flow anymore: most subsequent computations will only be “announced” as the result of some input operation. So we might as well ditch the list structure, and simply cons a “and then do” to each output operation.
data IO₂ = TxtOut String IO₂
         | TxtIn (String -> IO₂)
         | Terminate

main₄ :: IO₂
main₄ = TxtIn $ \_ ->
          TxtOut "Hello World"
            Terminate
Not too bad!
So what has all of this to do with monads?
In practice, you wouldn't want to use plain constructors to define all your programs. There would need to be a good couple of such fundamental constructors, yet for most higher-level stuff we would like to write a function with some nice high-level signature. It turns out most of these would look quite similar: accept some kind of meaningfully-typed value, and yield an IO action as the result.
getTime :: (UTCTime -> IO₂) -> IO₂
randomRIO :: Random r => (r,r) -> (r -> IO₂) -> IO₂
findFile :: RegEx -> (Maybe FilePath -> IO₂) -> IO₂
There's evidently a pattern here, and we'd better write it as
type IO₃ a = (a -> IO₂) -> IO₂  -- If this reminds you of continuation-passing
                                -- style, you're right.

getTime :: IO₃ UTCTime
randomRIO :: Random r => (r,r) -> IO₃ r
findFile :: RegEx -> IO₃ (Maybe FilePath)
Now that starts to look familiar, but we're still only dealing with thinly-disguised plain functions under the hood, and that's risky: each “value-action” has the responsibility of actually passing on the resulting action of any contained function (else the control flow of the entire program is easily disrupted by one ill-behaved action in the middle). We'd better make that requirement explicit. Well, it turns out those are the monad laws, though I'm not sure we can really formulate them without the standard bind/join operators.
At any rate, we've now reached a formulation of IO that has a proper monad instance:
import Control.Monad (ap)

data IO₄ a = TxtOut String (IO₄ a)
           | TxtIn (String -> IO₄ a)
           | TerminateWith a

txtOut :: String -> IO₄ ()
txtOut s = TxtOut s $ TerminateWith ()

txtIn :: IO₄ String
txtIn = TxtIn TerminateWith

instance Functor IO₄ where
  fmap f (TerminateWith a) = TerminateWith $ f a
  fmap f (TxtIn g) = TxtIn $ fmap f . g
  fmap f (TxtOut s c) = TxtOut s $ fmap f c

instance Applicative IO₄ where
  pure = TerminateWith
  (<*>) = ap

instance Monad IO₄ where
  TerminateWith x >>= f = f x
  TxtOut s c >>= f = TxtOut s $ c >>= f
  TxtIn g >>= f = TxtIn $ (>>= f) . g
Obviously this is not an efficient implementation of IO, but it's in principle usable.
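As a tiny usage sketch (assuming the definitions above), the classic "read a line, echo it back" program is just the two primitives chained with (>>=):

echoOnce :: IO₄ ()
echoOnce = txtIn >>= txtOut
-- evaluates to TxtIn (\s -> TxtOut s (TerminateWith ()))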
Monads serve basically to compose functions together in a chain. Period.
Now the way they compose differs across the existing monads, thus resulting in different behaviors (e.g., to simulate mutable state in the state monad).
The confusion about monads is that being so general, i.e., a mechanism to compose functions, they can be used for many things, thus leading people to believe that monads are about state, about IO, etc, when they are only about "composing functions".
Now, one interesting thing about monads is that the result of the composition is always of type "M a", that is, a value inside an envelope tagged with "M". This feature happens to be really nice for implementing, for example, a clear separation between pure and impure code: declare all impure actions as functions of type "IO a" and provide no function, when defining the IO monad, to take the "a" value out of the "IO a". The result is that no function can be pure and at the same time take a value out of an "IO a", because there is no way to take such a value while staying pure (the function must be inside the "IO" monad to use such a value). (NOTE: well, nothing is perfect, so the "IO straitjacket" can be broken using "unsafePerformIO :: IO a -> a", thus polluting what was supposed to be a pure function, but this should be used very sparingly, and only when you really know you are not introducing any impure code with side effects.)
Monads are just a convenient framework for solving a class of recurring problems. First, monads must be functors (i.e. they must support mapping without looking at the elements or their type); they must also bring a binding (or chaining) operation and a way to create a monadic value from an element type (return). Finally, bind and return must satisfy three equations (left identity, right identity, and associativity), also called the monad laws. (Alternatively one could define monads to have a flattening operation instead of binding.)
The list monad is commonly used to deal with non-determinism. The bind operation selects one element of the list (intuitively, all of them in parallel worlds), lets the programmer do some computation with it, and then combines the results from all worlds into a single list (by concatenating, or flattening, a nested list). Here is how one would define a permutation function in the monadic framework of Haskell:
perm [e] = [[e]]
perm l = do (leader, index) <- zip l [0 :: Int ..]
            let shortened = take index l ++ drop (index + 1) l
            trailer <- perm shortened
            return (leader : trailer)
Here is an example repl session:
*Main> perm "a"
["a"]
*Main> perm "ab"
["ab","ba"]
*Main> perm ""
[]
*Main> perm "abc"
["abc","acb","bac","bca","cab","cba"]
It should be noted that the list monad is in no way a side effecting computation. A mathematical structure being a monad (i.e. conforming to the above mentioned interfaces and laws) does not imply side effects, though side-effecting phenomena often nicely fit into the monadic framework.
You need monads if you have a type constructor and functions that return values of that type family. Eventually, you would like to combine these kinds of functions together. These are the three key elements to answer why.
Let me elaborate. You have Int, String and Real and functions of type Int -> String, String -> Real and so on. You can combine these functions easily, ending with Int -> Real. Life is good.
Then, one day, you need to create a new family of types. It could be because you need to consider the possibility of returning no value (Maybe), returning an error (Either), multiple results (List) and so on.
Notice that Maybe is a type constructor. It takes a type, like Int and returns a new type Maybe Int. First thing to remember, no type constructor, no monad.
Of course, you want to use your type constructor in your code, and soon you end with functions like Int -> Maybe String and String -> Maybe Float. Now, you can't easily combine your functions. Life is not good anymore.
And here's when monads come to the rescue. They allow you to combine that kind of functions again. You just need to change composition (.) for Kleisli composition (>=>).
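A minimal sketch of what that looks like for Maybe (the function names are made up for illustration):

import Control.Monad ((>=>))

parseAge :: String -> Maybe Int
parseAge s = case reads s of
  [(n, "")] -> Just n
  _         -> Nothing

checkAdult :: Int -> Maybe Int
checkAdult n = if n >= 18 then Just n else Nothing

-- composes like (.), but through the Maybe box
parseAdult :: String -> Maybe Int
parseAdult = parseAge >=> checkAdult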
Why do we need monadic types?
Since it was the quandary of I/O and its observable effects in nonstrict languages like Haskell that brought the monadic interface to such prominence:
[...] monads are used to address the more general problem of computations (involving state, input/output, backtracking, ...) returning values: they do not solve any input/output-problems directly but rather provide an elegant and flexible abstraction of many solutions to related problems. [...] For instance, no less than three different input/output-schemes are used to solve these basic problems in Imperative functional programming, the paper which originally proposed `a new model, based on monads, for performing input/output in a non-strict, purely functional language'. [...]
[Such] input/output-schemes merely provide frameworks in which side-effecting operations can safely be used with a guaranteed order of execution and without affecting the properties of the purely functional parts of the language.
Claus Reinke (pages 96-97 of 210).
(emphasis by me.)
[...] When we write effectful code – monads or no monads – we have to constantly keep in mind the context of expressions we pass around.
The fact that monadic code ‘desugars’ (is implementable in terms of) side-effect-free code is irrelevant. When we use monadic notation, we program within that notation – without considering what this notation desugars into. Thinking of the desugared code breaks the monadic abstraction. A side-effect-free, applicative code is normally compiled to (that is, desugars into) C or machine code. If the desugaring argument has any force, it may be applied just as well to the applicative code, leading to the conclusion that it all boils down to the machine code and hence all programming is imperative.
[...] From the personal experience, I have noticed that the mistakes I make when writing monadic code are exactly the mistakes I made when programming in C. Actually, monadic mistakes tend to be worse, because monadic notation (compared to that of a typical imperative language) is ungainly and obscuring.
Oleg Kiselyov (page 21 of 26).
The most difficult construct for students to understand is the monad. I introduce IO without mentioning monads.
Olaf Chitil.
More generally:
Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.
Considering that the concept of monads is not likely to disappear from the functional programming landscape any time soon, it is vital that we, as the functional programming community, somehow overcome the problems novices encounter when first studying monads.
Tim Steenvoorden, Jurriën Stutterheim, Erik Barendsen and Rinus Plasmeijer.
If only there was another way to specify "a guaranteed order of execution" in Haskell, while keeping the ability to separate regular Haskell definitions from those involved in I/O (and its observable effects) - translating this variation of Philip Wadler's echo:
val echoML : unit -> unit
fun echoML () = let val c = getcML () in
                  if c = #"\n" then
                    ()
                  else
                    let val _ = putcML c in
                      echoML ()
                    end
                end

fun putcML c = TextIO.output1(TextIO.stdOut,c);
fun getcML () = valOf(TextIO.input1(TextIO.stdIn));
...could then be as simple as:
echo :: OI -> ()
echo u = let !(u1:u2:u3:_) = partsOI u in
         let !c = getChar u1 in
         if c == '\n' then
           ()
         else
           let !_ = putChar c u2 in
           echo u3
where:
data OI -- abstract
foreign import ccall "primPartOI" partOI :: OI -> (OI, OI)
⋮
foreign import ccall "primGetCharOI" getChar :: OI -> Char
foreign import ccall "primPutCharOI" putChar :: Char -> OI -> ()
⋮
and:
partsOI :: OI -> [OI]
partsOI u = let !(u1, u2) = partOI u in u1 : partsOI u2
How would this work? At run-time, Main.main receives an initial OI pseudo-data value as an argument:
module Main(main) where
main :: OI -> ()
⋮
...from which other OI values are produced, using partOI or partsOI. All you have to do is ensure each new OI value is used at most once, in each call to an OI-based definition, foreign or otherwise. In return, you get back a plain ordinary result - it isn't e.g. paired with some odd abstract state, nor does it require the use of a callback continuation, etc.
Using OI, instead of the unit type () like Standard ML does, means we can avoid always having to use the monadic interface:
Once you're in the IO monad, you're stuck there forever, and are reduced to Algol-style imperative programming.
Robert Harper.
But if you really do need it:
type IO a = OI -> a

unitIO :: a -> IO a
unitIO x = \ u -> let !_ = partOI u in x

bindIO :: IO a -> (a -> IO b) -> IO b
bindIO m k = \ u -> let !(u1, u2) = partOI u in
                    let !x = m u1 in
                    let !y = k x u2 in
                    y
⋮
So, monadic types aren't always needed - there are other interfaces out there:
LML had a fully fledged implementation of oracles running on a multi-processor (a Sequent Symmetry) back in ca 1989. The description in the Fudgets thesis refers to this implementation. It was fairly pleasant to work with and quite practical.
[...]
These days everything is done with monads so other solutions are sometimes forgotten.
Lennart Augustsson (2006).
Wait a moment: since it so closely resembles Standard ML's direct use of effects, is this approach and its use of pseudo-data referentially transparent?
Absolutely - just find a suitable definition of "referential transparency"; there's plenty to choose from...

Data constructors without breaking the open/closed principle

I have a data constructor like this
class FooClass a where
  foo :: a -> b

class BarClass a where
  bar :: a -> b

data FooBar = Foo :: FooClass a => a -> IO ()
            | Bar :: BarClass a => a -> IO ()
So that I can use pattern matching:
foobar :: FooBar -> a -> IO ()
foobar (Foo f) x = f (foo x)
foobar (Bar f) x = f (bar x)
However, this breaks the open/closed principle.
I'd like to be able to extend FooBar with additional methods based on other classes.
How would I implement this in Haskell?
As others have pointed out, this code is flawed in ways that obscure your question. It's also probably dangerous to try to think too hard about how OO principles translate to FP. They have a place, because much of OO is embedded in FP naturally, but it's much better to learn FP directly first and then observe the laws later as certain special cases.
In particular, we can talk about how greater refinement of types is a form of extension. For instance, comparing the types like
(Num a) => a -> IO ()
(Num a, Show a) => a -> IO ()
we can talk about how the second function takes in a set of types which is a natural subtype of the inputs to the first function. In particular, the set of possible types that can be input to the second function is a refinement of the inputs to the first. As users of these functions, there are fewer valid ways to use the second function. As implementers of these functions, there are more valid ways to implement the second function. In fact, we know the following
All values which are valid inputs to the second function are also valid inputs to the first
All functions which are correctly typed by the first signature are also correctly typed by the second.
This duality between giving and taking is explored in the study of Game semantics. The idea of "open for extension" plays out trivially in that we can always decide to ask for a more refined type, but it's almost completely uninteresting since that's just obvious in how refined types are used.
So what about ADTs (data declarations) directly? Are they Open/Closed? Mu: ADTs aren't objects, so the rule does not apply directly.
The trick to doing your example in Haskell is to use functions instead of classes:
-- FooBar is like a base class
-- with methods foo and bar.
-- I've interpreted your example liberally
-- for purposes of illustration.
-- In particular, FooBar has two methods -
-- foo and bar - with different signatures.
data FooBar = FooBar {
  foo :: IO (),
  bar :: Int -> Int
}
-- Use functions for classes, like in Javascript.
-- This doesn't mean Haskell is untyped, it just means classes are not types.
-- Classes are really functions that make objects.
fooClass :: Int -> FooBar
fooClass n = FooBar {
  foo = putStrLn ("Foo " ++ show n),
  bar = \n -> n+1
}
barClass :: FooBar
barClass = FooBar {
  foo = putStrLn "Bar ",
  bar = \n -> n * 2
}
-- Now we can define a function that uses FooBar and it doesn't matter
-- if the FooBar we pass in came from fooClass, barClass or something else,
-- bazClass, say.
foobar (FooBar foo bar) = do
  -- invoke foo
  foo
  -- use bar
  print (bar 7)
Here FooBar is 'open for extension' because we can create as many FooBar values as we like with different behaviours.
To 'extend' FooBar with another field, baz, without changing FooBar, fooClass or barClass, we need to declare a FooBarBaz type that includes a FooBar. We can still use our foobar function; we just have to extract the FooBar from the FooBarBaz first, as sketched below.
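Here is a sketch of that extension (FooBarBaz, bazClass and useFooBarBaz are hypothetical names):

data FooBarBaz = FooBarBaz {
  fooBar :: FooBar,
  baz :: String -> String
}

bazClass :: FooBarBaz
bazClass = FooBarBaz {
  fooBar = barClass,
  baz = reverse
}

-- foobar is reused unchanged on the embedded FooBar
useFooBarBaz :: FooBarBaz -> IO ()
useFooBarBaz fbb = foobar (fooBar fbb)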
So far, I've been keeping close to OOP. This is because Bertrand Meyer worded the open closed principle to require OOP or something very like it:
software entities (classes, modules, functions, etc.) should be open
for extension, but closed for modification
In particular, the word "extension" is traditionally interpreted as meaning "subclassing". If you're prepared to interpret the principle as merely "having extension points", then any function that takes another function as parameter is "open for extension". This is so common in functional programming that it's not considered a principle. The "parameterisation principle" just doesn't sound the same.
