Haskell code littered with TVar operations and functions taking many arguments: code smell?

I'm writing a MUD server in Haskell (MUD = Multi User Dungeon: basically, a multi-user text adventure/role-playing game). The game world data/state is represented in about 15 different IntMaps. My monad transformer stack looks like this: ReaderT MudData IO, where the MudData type is a record type containing the IntMaps, each in its own TVar (I'm using STM for concurrency):
data MudData = MudData { _armorTblTVar    :: TVar (IntMap Armor)
                       , _clothingTblTVar :: TVar (IntMap Clothing)
                       , _coinsTblTVar    :: TVar (IntMap Coins)
...and so on. (I'm using lenses, thus the underscores.)
Some functions need certain IntMaps, while other functions need others. Thus, having each IntMap in its own TVar provides granularity.
However, a pattern has emerged in my code. In the functions that handle player commands, I need to read (and sometimes later write) to my TVars within the STM monad. Thus these functions end up having an STM helper defined in their where blocks. These STM helpers often have quite a few readTVar operations in them, as most commands need to access a handful of the IntMaps. Furthermore, a function for a given command may call out to a number of pure helper functions that also need some or all of the IntMaps. These pure helper functions thus sometimes end up taking a lot of arguments (sometimes over 10).
So, my code has become "littered" with lots of readTVar expressions and functions that take a large number of arguments. Here are my questions: is this a code smell? Am I missing some abstraction that would make my code more elegant? Is there a more ideal way to structure my data/code?
Thanks!

The solution to this problem lies in changing the pure helper functions. We don't really want them to be pure; we want to leak out a single side effect - whether or not they read specific pieces of data.
Let's say we have a pure function that uses only clothing and coins:
moreVanityThanWealth :: IntMap Clothing -> IntMap Coins -> Bool
moreVanityThanWealth clothing coins = ...
It's usually nice to know that a function only cares about e.g. clothing and coins, but in your case this knowledge is irrelevant and is just creating headaches. We are going to deliberately forget this detail. If we followed mb14's suggestion, we would pass an entire pure MudData' like the following to the helper functions.
data MudData' = MudData' { _armorTbl    :: IntMap Armor
                         , _clothingTbl :: IntMap Clothing
                         , _coinsTbl    :: IntMap Coins
                         }
moreVanityThanWealth :: MudData' -> Bool
moreVanityThanWealth md =
    let clothing = _clothingTbl md
        coins    = _coinsTbl md
    in  ...
MudData and MudData' are almost identical to each other. One of them wraps its fields in TVars and the other one doesn't. We can modify MudData so that it takes an extra type parameter (of kind * -> *) for what to wrap the fields in. MudData will have the slightly unusual kind (* -> *) -> *, which is closely related to lenses but doesn't have much library support. I call this pattern a Model.
data MudData f = MudData { _armorTbl    :: f (IntMap Armor)
                         , _clothingTbl :: f (IntMap Clothing)
                         , _coinsTbl    :: f (IntMap Coins)
                         }
We can recover the original MudData with MudData TVar. We can recreate the pure version by wrapping the fields in Identity, newtype Identity a = Identity {runIdentity :: a}. In terms of MudData Identity, our function would be written as
moreVanityThanWealth :: MudData Identity -> Bool
moreVanityThanWealth md =
    let clothing = runIdentity . _clothingTbl $ md
        coins    = runIdentity . _coinsTbl    $ md
    in  ...
We've successfully forgotten which parts of the MudData we've used, but now we don't have the lock granularity we want. We need to recover, as a side effect, exactly what we just forgot. If we wrote the STM version of the helper it would look like
moreVanityThanWealth :: MudData TVar -> STM Bool
moreVanityThanWealth md = do
    clothing <- readTVar . _clothingTbl $ md
    coins    <- readTVar . _coinsTbl    $ md
    return ...
This STM version for MudData TVar is almost exactly the same as the pure version we just wrote for MudData Identity. They only differ by the type of the reference (TVar vs. Identity), what function we use to get the values out of the references (readTVar vs runIdentity), and how the result is returned (in STM or as a plain value). It would be nice if the same function could be used to provide both. We are going to extract what is common between the two functions. To do so, we'll introduce a type class MonadReadRef r m for the Monads we can read some type of reference from. r is the type of the reference, readRef is the function to get the values out of the references, and m is how the result is returned. The following MonadReadRef is closely related to the MonadRef class from ref-fd.
{-# LANGUAGE FunctionalDependencies #-}

class Monad m => MonadReadRef r m | m -> r where
    readRef :: r a -> m a
As long as code is parameterized over all MonadReadRef r ms, it is pure. We can see this by running it with the following instance of MonadReadRef for ordinary values held in an Identity. The id in readRef = id is the same as return . runIdentity.
instance MonadReadRef Identity Identity where
    readRef = id
We'll rewrite moreVanityThanWealth in terms of MonadReadRef.
moreVanityThanWealth :: MonadReadRef r m => MudData r -> m Bool
moreVanityThanWealth md = do
    clothing <- readRef . _clothingTbl $ md
    coins    <- readRef . _coinsTbl    $ md
    return ...
When we add a MonadReadRef instance for TVars in STM, we can use these "pure" computations in STM but leak the side-effect of which TVars were read.
instance MonadReadRef TVar STM where
    readRef = readTVar
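To see the payoff, here is a minimal sketch (totalCoins is a hypothetical helper, not from the question; Data.IntMap is assumed imported qualified as IM): the same code runs purely via Identity and transactionally via STM.

import qualified Data.IntMap as IM

-- a hypothetical helper, written once against MonadReadRef
totalCoins :: MonadReadRef r m => MudData r -> m Int
totalCoins md = do
    coins <- readRef . _coinsTbl $ md
    return (IM.size coins)

-- run it purely in tests ...
pureCount :: MudData Identity -> Int
pureCount = runIdentity . totalCoins

-- ... and transactionally in production, touching only the coins TVar
stmCount :: MudData TVar -> STM Int
stmCount = totalCoins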

Yes, this obviously makes your code complex and clutters the important code with a lot of boilerplate details. And functions with more than 4 arguments are a sign of problems.
I'd ask the question: Do you really gain anything by having separate TVars? Isn't it a case of premature optimization? Before taking such a design decision as splitting your data structure among multiple separate TVars, I'd definitely do some measurements (see criterion). You can create a sample test that models the expected number of concurrent threads and frequency of data updates and check what you are really gaining or losing by having multiple TVars vs a single one vs an IORef.
Keep in mind:
If there are multiple threads competing for common locks in an STM transaction, the transactions can get restarted several times before they manage to complete successfully. So under some circumstances, having multiple locks can actually make things worse.
If there is ultimately just one data structure that you need to synchronize, you might consider using a single IORef instead. Its atomic operations are very fast, which could compensate for having a single central lock (a minimal sketch follows this list).
In Haskell it's surprisingly difficult for a pure function to block an atomic STM or IORef transaction for a long time. The reason is laziness: you only need to create thunks within such a transaction, not to evaluate them. This is true in particular for a single atomic IORef. The thunks are evaluated outside such transactions (by a thread that inspects them, or you can decide to force them at some point if you need more control; this can be desirable in your case, as if your system evolves without anybody observing it, you can easily accumulate unevaluated thunks).
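To illustrate the single-IORef option, a minimal sketch (assuming the pure MudData' record with plain IntMap fields from the other answer, and a hypothetical giveCoins command):

import Data.IORef
import qualified Data.IntMap as IM

-- one central reference holding the whole pure world state
newtype World = World (IORef MudData')

giveCoins :: World -> Int -> Coins -> IO ()
giveCoins (World ref) i c =
    -- the update is applied atomically; thanks to laziness the new
    -- IntMap is only a thunk here, evaluated by whoever reads it later
    atomicModifyIORef' ref $ \md ->
        (md { _coinsTbl = IM.insert i c (_coinsTbl md) }, ())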
If it turns out that having multiple TVars is indeed crucial, then I'd probably write all the code in a custom monad (as described by @Cirdec while I was writing my answer), whose implementation would be hidden from the main code, and which would provide functions for reading (and perhaps also writing) parts of the state. It'd then be run as a single STM transaction, reading and writing only what's needed, and you could have a pure version of the monad for testing.

Related

How to understand `MonadUnliftIO`'s requirement of "no stateful monads"?

I've looked over https://www.fpcomplete.com/blog/2017/06/tale-of-two-brackets, though skimming some parts, and I still don't quite understand the core issue "StateT is bad, IO is OK", other than vaguely getting the sense that Haskell allows one to write bad StateT monads (or in the ultimate example in the article, MonadBaseControl instead of StateT, I think).
In the haddocks, the following law must be satisfied:
askUnliftIO >>= (\u -> liftIO (unliftIO u m)) = m
So this appears to be saying that state is not mutated in the monad m when using askUnliftIO. But to my mind, in IO, the entire world can be the state. I could be reading and writing to a text file on disk, for instance.
To quote another article by Michael,
False purity. We say WriterT and StateT are pure, and technically they are. But let's be honest: if you have an application which is entirely living within a StateT, you're not getting the benefits of restrained mutation that you want from pure code. May as well call a spade a spade, and accept that you have a mutable variable.
This makes me think this is indeed the case: with IO we are being honest, with StateT, we are not being honest about mutability ... but that seems another issue than what the law above is trying to show; after all, MonadUnliftIO is assuming IO. I'm having trouble understanding conceptually how IO is more restrictive than something else.
Update 1
After sleeping (some), I am still confused but am gradually getting less so as the day wears on. I worked out the law proof for IO. I noticed the presence of id in the README. In particular,
instance MonadUnliftIO IO where
    askUnliftIO = return (UnliftIO id)
So askUnliftIO would appear to return an IO (IO a) on an UnliftIO m.
Prelude> fooIO = print 5
Prelude> :t fooIO
fooIO :: IO ()
Prelude> let barIO :: IO(IO ()); barIO = return fooIO
Prelude> :t barIO
barIO :: IO (IO ())
Back to the law, it really appears to be saying that state is not mutated in the monad m when doing a round trip on the transformed monad (askUnliftIO), where the round trip is unliftIO -> liftIO.
Resuming the example above, barIO :: IO (IO ()), so if we do barIO >>= (\u -> liftIO (unliftIO u m)), then u :: IO () and unliftIO u == IO (), then liftIO (IO ()) == IO (). So since everything has basically been applications of id under the hood, we can see that no state was changed, even though we are using IO. Crucially, I think, what is important is that the value in a is never run, nor is any other state modified, as a result of using askUnliftIO. If it were, then like in the case of randomIO :: IO a, we would not be able to get the same value had we not run askUnliftIO on it. (Verification attempt 1 below.)
But, it still seems like we could do the same for other Monads, even if they do maintain state. But I also see how, for some monads, we may not be able to do so. Thinking of a contrived example: each time we access the value of type a contained in the stateful monad, some internal state is changed.
Verification attempt 1
> fooIO >> askUnliftIO
5
> fooIOunlift = fooIO >> askUnliftIO
> :t fooIOunlift
fooIOunlift :: IO (UnliftIO IO)
> fooIOunlift
5
Good so far, but confused about why the following occurs:
> fooIOunlift >>= (\u -> unliftIO u)
<interactive>:50:24: error:
* Couldn't match expected type `IO b'
with actual type `IO a0 -> IO a0'
* Probable cause: `unliftIO' is applied to too few arguments
In the expression: unliftIO u
In the second argument of `(>>=)', namely `(\ u -> unliftIO u)'
In the expression: fooIOunlift >>= (\ u -> unliftIO u)
* Relevant bindings include
it :: IO b (bound at <interactive>:50:1)
"StateT is bad, IO is OK"
That's not really the point of the article. The idea is that MonadBaseControl permits some confusing (and often undesirable) behaviors with stateful monad transformers in the presence of concurrency and exceptions.
finally :: StateT s IO a -> StateT s IO a -> StateT s IO a is a great example. If you use the "StateT is attaching a mutable variable of type s onto a monad m" metaphor, then you might expect that the finalizer action gets access to the most recent s value when an exception was thrown.
forkState :: StateT s IO a -> StateT s IO ThreadId is another one. You might expect that the state modifications from the input would be reflected in the original thread.
lol :: StateT Int IO [ThreadId]
lol = do
    for [1..10] $ \i -> do
        forkState $ modify (+i)
You might expect that lol could be rewritten (modulo performance) as modify (+ sum [1..10]). But that's not right. The implementation of forkState just passes the initial state to the forked thread, and then can never retrieve any state modifications. The easy/common understanding of StateT fails you here.
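forkState is not a standard library function; a sketch of the behavior described, where the forked computation merely receives a copy of the current state, might look like this:

import Control.Concurrent (ThreadId, forkIO)
import Control.Monad (void)
import Control.Monad.State (StateT (..))

forkState :: StateT s IO a -> StateT s IO ThreadId
forkState action = StateT $ \s -> do
    tid <- forkIO (void (runStateT action s)) -- the child runs with a copy of s
    pure (tid, s)                             -- the parent's state is unchanged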
Instead, you have to adopt a more nuanced view of StateT s m a as "a transformer that provides a thread-local immutable variable of type s which is implicitly threaded through a computation, and it is possible to replace that local variable with a new value of the same type for future steps of the computation" (more or less a verbose English retelling of s -> m (a, s)). With this understanding, the behavior of finally becomes a bit more clear: it's a local variable, so it does not survive exceptions. Likewise, forkState becomes more clear: it's a thread-local variable, so obviously a change in a different thread won't affect any others.
This is sometimes what you want. But it's usually not how people write code IRL and it often confuses people.
For a long time, the default choice in the ecosystem to do this "lowering" operation was MonadBaseControl, and this had a bunch of downsides: hella confusing types, difficult to implement instances, impossible to derive instances, sometimes confusing behavior. Not a great situation.
MonadUnliftIO restricts things to a simpler set of monad transformers, and is able to provide relatively simple types, derivable instances, and always predictable behavior. The cost is that ExceptT, StateT, etc transformers can't use it.
The underlying principle is: by restricting what is possible, we make it easier to understand what might happen. MonadBaseControl is extremely powerful and general, and quite difficult to use and confusing as a result. MonadUnliftIO is less powerful and general, but it's much easier to use.
So this appears to be saying that state is not mutated in the monad m when using askUnliftIO.
This isn't true - the law is stating that unliftIO shouldn't do anything with the monad transformer aside from lowering it into IO. Here's something that breaks that law:
newtype WithInt a = WithInt (ReaderT Int IO a)
    deriving newtype (Functor, Applicative, Monad, MonadIO, MonadReader Int)

instance MonadUnliftIO WithInt where
    askUnliftIO = pure (UnliftIO (\(WithInt readerAction) -> runReaderT readerAction 0))
Let's verify that this breaks the law given: askUnliftIO >>= (\u -> liftIO (unliftIO u m)) = m.
test :: WithInt Int
test = do
    int <- ask
    liftIO $ print int
    pure int

checkLaw :: WithInt ()
checkLaw = do
    first  <- test
    second <- askUnliftIO >>= (\u -> liftIO (unliftIO u test))
    when (first /= second) $
        liftIO $ putStrLn "Law violation!!!"
The value returned by test and the askUnliftIO ... lowering/lifting are different, so the law is broken. Furthermore, the observed effects are different, which isn't great either.
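For contrast, a law-abiding instance would lower using the environment actually in scope rather than a hard-coded constant; a sketch (replacing the instance above):

instance MonadUnliftIO WithInt where
    askUnliftIO = WithInt $ ReaderT $ \r ->
        pure (UnliftIO (\(WithInt readerAction) -> runReaderT readerAction r))

With this definition the round trip askUnliftIO >>= \u -> liftIO (unliftIO u m) runs m in the same environment it would have seen anyway, so the law holds.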

How to modify parts of a State in Haskell

I have a number of operations which modify a System. System is defined like this:
data System = Sys
    { sysId        :: Int
    , sysRand      :: StdGen
    , sysProcesses :: ProcessDb
    , sysItems     :: ItemDb
    }
with e.g.
type ProcessDb = M.Map Int Process
But I also have some functions, which do not need access to the full System, but have types like this:
foo' :: (Process, ItemDb) -> ((Process, ItemDb),[Event])
Currently I gave them types like
foo :: System -> (System, [Event])
But this is a needlessly broad interface. To use the narrow interface above in conjunction with System I would have to extract a single Process and the ItemDb from System, run foo', and then modify System with the results.
This is quite some unwrapping and wrapping, and it results in more lines of code than just passing the System as a whole and letting foo extract whatever it needs. In the latter case, the wrapping and unwrapping is mingled with the actual foo' operation, and I have the feeling that these two aspects should be separated.
I suppose I need some kind of lifting operation which turns a narrow foo' into a foo. I suppose I could write this, but I would have to write such a lifter for every signature of the narrow functions, resulting in lots of different lifters.
is there an idiom how to solve such problems?
is it worth bothering?
One common solution is to use a class, possibly created by the Template Haskell magic of Control.Lens.TH.makeClassy. The gist is that you pass in the whole System, but you don't let the function know that that's what you're giving it. All it's allowed to know is that what you're giving it offers methods for getting and/or modifying the pieces it's supposed to handle.
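A sketch of what that looks like (with hypothetical stand-ins: ItemDb is stubbed, and the fields get underscore prefixes so Template Haskell can generate the lenses):

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

type ItemDb = [String]  -- stand-in for the question's ItemDb

data System = Sys { _sysId :: Int, _sysItems :: ItemDb }
makeClassy ''System
-- generates, roughly:
--   class HasSystem c where
--     system   :: Lens' c System
--     sysId    :: Lens' c Int
--     sysItems :: Lens' c ItemDb
-- plus: instance HasSystem System where system = id

-- itemCount never learns whether it was handed a whole SimState or
-- just a System; it only demands that its argument contain a System.
itemCount :: HasSystem c => c -> Int
itemCount c = length (c ^. sysItems)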
I ended up writing a function which works on any State and which requires a "Lens" that captures the specific transformation from the bigger State to the smaller State and back:
focus :: Lens s' s -> State s' a -> State s a
focus lens ms' = do
    s <- get
    let (s', set) = lens s
        (a, s'')  = runState ms' s'
    put (set s'')
    return a
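The Lens type this signature assumes is never shown; judging from the onSys example below, it is presumably a getter paired with a "putter-back", something like:

-- inferred from onSys below: extract the inner state, and remember
-- how to put an updated inner state back into the outer one
type Lens inner outer = outer -> (inner, inner -> outer)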
It allows me to write things like
run :: ExitP -> State SimState Log
...
do
evqs' <- focus onSys $ step (t, evt)
...
Where step operates on the "smaller" state
step :: Timed Event -> State Sys.System [EventQu]
Here onSys is a "Lens" and it works like this:
onSys :: Lens Sys.System SimState
onSys (Sis e s) = (s, Sis e)
where
data SimState = Sis
    { events :: EventQu
    , sisSys :: Sys.System
    }
I suppose the existing lens libraries follow a similar approach, but do much more magic, like creating lenses automatically. I did shy away from lenses. Instead I was pleased to realise that all it takes is a few lines of code to get what I need.

Why do we need monads?

In my humble opinion the answers to the famous question "What is a monad?", especially the most voted ones, try to explain what a monad is without clearly explaining why monads are really necessary. Can they be explained as the solution to a problem?
Why do we need monads?
We want to program only using functions. ("functional programming (FP)" after all).
Then, we have a first big problem. This is a program:
f(x) = 2 * x
g(x,y) = x / y
How can we say what is to be executed first? How can we form an ordered sequence of functions (i.e. a program) using no more than functions?
Solution: compose functions. If you want first g and then f, just write f(g(x,y)). This way, "the program" is a function as well: main = f(g(x,y)). OK, but ...
More problems: some functions might fail (e.g. g(2,0): divide by 0). We have no "exceptions" in FP (an exception is not a function). How do we solve it?
Solution: let's allow functions to return two kinds of things: instead of having g : Real,Real -> Real (a function from two reals into a real), let's allow g : Real,Real -> Real | Nothing (a function from two reals into (real or nothing)).
But functions should (to be simpler) return only one thing.
Solution: let's create a new type of data to be returned, a "boxing type" that encloses either a real or simply nothing. Hence, we can have g : Real,Real -> Maybe Real. OK, but ...
What happens now to f(g(x,y))? f is not ready to consume a Maybe Real. And, we don't want to change every function we could connect with g to consume a Maybe Real.
Solution: let's have a special function to "connect"/"compose"/"link" functions. That way, we can, behind the scenes, adapt the output of one function to feed the following one.
In our case: g >>= f (connect/compose g to f). We want >>= to get g's output, inspect it and, in case it is Nothing just don't call f and return Nothing; or on the contrary, extract the boxed Real and feed f with it. (This algorithm is just the implementation of >>= for the Maybe type). Also note that >>= must be written only once per "boxing type" (different box, different adapting algorithm).
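In Haskell, that adapting algorithm is exactly the Prelude's >>= for Maybe. Written out standalone (with a hypothetical safeDiv playing the role of g):

-- the "connect" step for the Maybe box, as the Prelude defines it
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- g failed: don't call f, return Nothing
bindMaybe (Just x) f = f x       -- unbox the value and feed it to f

safeDiv :: Double -> Double -> Maybe Double  -- the g of the text
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

example :: Maybe Double
example = safeDiv 1 0 `bindMaybe` (Just . (2 *))  -- Nothing; f never runs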
Many other problems arise which can be solved using this same pattern:
1. Use a "box" to codify/store different meanings/values, and have functions like g that return those "boxed values".
2. Have a composer/linker g >>= f to help connect g's output to f's input, so we don't have to change any f at all.
Remarkable problems that can be solved using this technique are:
having a global state that every function in the sequence of functions ("the program") can share: solution, the State monad.
We don't like "impure functions": functions that yield different outputs for the same input. Therefore, let's mark those functions, making them return a tagged/boxed value: the IO monad.
Total happiness!
The answer is, of course, "We don't". As with all abstractions, it isn't necessary.
Haskell does not need a monad abstraction. It isn't necessary for performing IO in a pure language. The IO type takes care of that just fine by itself. The existing monadic desugaring of do blocks could be replaced with desugaring to bindIO, returnIO, and failIO as defined in the GHC.Base module. (It's not a documented module on hackage, so I'll have to point at its source for documentation.) So no, there's no need for the monad abstraction.
So if it's not needed, why does it exist? Because it was found that many patterns of computation form monadic structures. Abstraction of a structure allows for writing code that works across all instances of that structure. To put it more concisely - code reuse.
In functional languages, the most powerful tool found for code reuse has been composition of functions. The good old (.) :: (b -> c) -> (a -> b) -> (a -> c) operator is exceedingly powerful. It makes it easy to write tiny functions and glue them together with minimal syntactic or semantic overhead.
But there are cases when the types don't work out quite right. What do you do when you have foo :: (b -> Maybe c) and bar :: (a -> Maybe b)? foo . bar doesn't typecheck, because b and Maybe b aren't the same type.
But... it's almost right. You just want a bit of leeway. You want to be able to treat Maybe b as if it were basically b. It's a poor idea to just flat-out treat them as the same type, though. That's more or less the same thing as null pointers, which Tony Hoare famously called the billion-dollar mistake. So if you can't treat them as the same type, maybe you can find a way to extend the composition mechanism (.) provides.
In that case, it's important to really examine the theory underlying (.). Fortunately, someone has already done this for us. It turns out that the combination of (.) and id form a mathematical construct known as a category. But there are other ways to form categories. A Kleisli category, for instance, allows the objects being composed to be augmented a bit. A Kleisli category for Maybe would consist of (.) :: (b -> Maybe c) -> (a -> Maybe b) -> (a -> Maybe c) and id :: a -> Maybe a. That is, the objects in the category augment the (->) with a Maybe, so (a -> b) becomes (a -> Maybe b).
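Concretely, Control.Monad's <=< is this Kleisli composition for any monad; a small sketch with a hypothetical halve in place of foo/bar:

import Control.Monad ((<=<))

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- (<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
quarter :: Int -> Maybe Int
quarter = halve <=< halve
-- quarter 12 == Just 3; quarter 6 == Nothing (the second halve fails)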
And suddenly, we've extended the power of composition to things that the traditional (.) operation doesn't work on. This is a source of new abstraction power. Kleisli categories work with more types than just Maybe. They work with every type that can assemble a proper category, obeying the category laws.
Left identity: id . f = f
Right identity: f . id = f
Associativity: f . (g . h) = (f . g) . h
As long as you can prove that your type obeys those three laws, you can turn it into a Kleisli category. And what's the big deal about that? Well, it turns out that monads are exactly the same thing as Kleisli categories. Monad's return is the same as Kleisli id. Monad's (>>=) isn't identical to Kleisli (.), but it turns out to be very easy to write each in terms of the other. And the category laws are the same as the monad laws, when you translate them across the difference between (>>=) and (.).
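The translation the paragraph mentions is short enough to show; these are the standard definitions, renamed here to avoid clashing with Control.Monad:

-- Kleisli (.) in terms of (>>=) ...
composeK :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
composeK g f = \a -> f a >>= g

-- ... and (>>=) in terms of Kleisli (.)
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = (f `composeK` const m) ()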
So why go through all this bother? Why have a Monad abstraction in the language? As I alluded to above, it enables code reuse. It even enables code reuse along two different dimensions.
The first dimension of code reuse comes directly from the presence of the abstraction. You can write code that works across all instances of the abstraction. There's the entire monad-loops package consisting of loops that work with any instance of Monad.
The second dimension is indirect, but it follows from the existence of composition. When composition is easy, it's natural to write code in small, reusable chunks. This is the same way having the (.) operator for functions encourages writing small, reusable functions.
So why does the abstraction exist? Because it's proven to be a tool that enables more composition in code, resulting in creating reusable code and encouraging the creation of more reusable code. Code reuse is one of the holy grails of programming. The monad abstraction exists because it moves us a little bit towards that holy grail.
Benjamin Pierce said in TAPL
A type system can be regarded as calculating a kind of static
approximation to the run-time behaviours of the terms in a program.
That's why a language equipped with a powerful type system is strictly more expressive than a poorly typed language. You can think about monads in the same way.
As @Carl and sigfpe point out, you can equip a datatype with all the operations you want without resorting to monads, typeclasses or whatever other abstract stuff. However monads allow you not only to write reusable code, but also to abstract away all redundant details.
As an example, let's say we want to filter a list. The simplest way is to use the filter function: filter (> 3) [1..10], which equals [4,5,6,7,8,9,10].
A slightly more complicated version of filter, that also passes an accumulator from left to right, is
import Data.List (mapAccumL)

swap (x, y) = (y, x)
(.*) = (.) . (.)

filterAccum :: (a -> b -> (Bool, a)) -> a -> [b] -> [b]
filterAccum f a xs = [x | (x, True) <- zip xs $ snd $ mapAccumL (swap .* f) a xs]
To get all i, such that i <= 10, sum [1..i] > 4, sum [1..i] < 25, we can write
filterAccum (\a x -> let a' = a + x in (a' > 4 && a' < 25, a')) 0 [1..10]
which equals [3,4,5,6].
Or we can redefine the nub function, that removes duplicate elements from a list, in terms of filterAccum:
nub' = filterAccum (\a x -> (x `notElem` a, x:a)) []
nub' [1,2,4,5,4,3,1,8,9,4] equals [1,2,4,5,3,8,9]. A list is passed as an accumulator here. The code works because it's possible to leave the list monad, so the whole computation stays pure (notElem doesn't actually use >>=, but it could). However it's not possible to safely leave the IO monad (i.e. you cannot execute an IO action and return a pure value — the value will always be wrapped in the IO monad). Another example is mutable arrays: after you have left the ST monad, where a mutable array lives, you cannot update the array in constant time anymore. So we need monadic filtering from the Control.Monad module:
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
filterM _ []     = return []
filterM p (x:xs) = do
    flg <- p x
    ys  <- filterM p xs
    return (if flg then x:ys else ys)
filterM executes a monadic action for all elements from a list, yielding the elements for which the monadic action returns True.
A filtering example with an array:
import Control.Monad.ST
import Data.Array.ST

nub' xs = runST $ do
    arr <- newArray (1, 9) True :: ST s (STUArray s Int Bool)
    let p i = readArray arr i <* writeArray arr i False
    filterM p xs

main = print $ nub' [1,2,4,5,4,3,1,8,9,4]
prints [1,2,4,5,3,8,9] as expected.
And a version with the IO monad, which asks what elements to return:
main = filterM p [1,2,4,5] >>= print
  where p i = putStrLn ("return " ++ show i ++ "?") *> readLn
E.g.
return 1? -- output
True -- input
return 2?
False
return 4?
False
return 5?
True
[1,5] -- output
And as a final illustration, filterAccum can be defined in terms of filterM:
filterAccum f a xs = evalState (filterM (state . flip f) xs) a
with the StateT monad, which is used under the hood, being just an ordinary datatype.
This example illustrates that monads not only allow you to abstract away computational context and write clean reusable code (due to the composability of monads, as @Carl explains), but also let you treat user-defined datatypes and built-in primitives uniformly.
I don't think IO should be seen as a particularly outstanding monad, but it's certainly one of the more astounding ones for beginners, so I'll use it for my explanation.
Naïvely building an IO system for Haskell
The simplest conceivable IO system for a purely-functional language (and in fact the one Haskell started out with) is this:
main₀ :: String -> String
main₀ _ = "Hello World"
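Such a program can actually be run: the Prelude's interact wires a String -> String function to stdin and stdout, and laziness lets output begin before all input has arrived. A tiny sketch:

import Data.Char (toUpper)

-- an "interactive program" of type String -> String, run by interact
main :: IO ()
main = interact (map toUpper)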
With laziness, that simple signature is enough to actually build interactive terminal programs – very limited, though. Most frustrating is that we can only output text. What if we added some more exciting output possibilities?
data Output = TxtOutput String
            | Beep Frequency

main₁ :: String -> [Output]
main₁ _ = [ TxtOutput "Hello World"
          -- , Beep 440 -- for debugging
          ]
cute, but of course a much more realistic “alternative output” would be writing to a file. But then you'd also want some way to read from files. Any chance?
Well, when we take our main₁ program and simply pipe a file to the process (using operating system facilities), we have essentially implemented file-reading. If we could trigger that file-reading from within the Haskell language...
readFile :: FilePath -> (String -> [Output]) -> [Output]
This would use an “interactive program” String->[Output], feed it a string obtained from a file, and yield a non-interactive program that simply executes the given one.
There's one problem here: we don't really have a notion of when the file is read. The [Output] list sure gives a nice order to the outputs, but we don't get an order for when the inputs will be done.
Solution: make input-events also items in the list of things to do.
data IO₀ = TxtOut String
         | TxtIn (String -> [Output])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [Output])
         | Beep Double

main₂ :: String -> [IO₀]
main₂ _ = [ FileRead "/dev/null" $ \_ ->
              [TxtOutput "Hello World"]
          ]
Ok, now you may spot an imbalance: you can read a file and make output dependent on it, but you can't use the file contents to decide to e.g. also read another file. Obvious solution: make the result of the input-events also something of type IO, not just Output. That sure includes simple text output, but also allows reading additional files etc..
data IO₁ = TxtOut String
         | TxtIn (String -> [IO₁])
         | FileWrite FilePath String
         | FileRead FilePath (String -> [IO₁])
         | Beep Double

main₃ :: String -> [IO₁]
main₃ _ = [ TxtIn $ \_ ->
              [TxtOut "Hello World"]
          ]
That would now actually allow you to express any file operation you might want in a program (though perhaps not with good performance), but it's somewhat overcomplicated:
main₃ yields a whole list of actions. Why don't we simply use the signature :: IO₁, which has this as a special case?
The lists don't really give a reliable overview of program flow anymore: most subsequent computations will only be “announced” as the result of some input operation. So we might as well ditch the list structure, and simply cons a “and then do” to each output operation.
data IO₂ = TxtOut String IO₂
         | TxtIn (String -> IO₂)
         | Terminate

main₄ :: IO₂
main₄ = TxtIn $ \_ ->
          TxtOut "Hello World" Terminate
Not too bad!
So what has all of this to do with monads?
In practice, you wouldn't want to use plain constructors to define all your programs. There would need to be a good couple of such fundamental constructors, yet for most higher-level stuff we would like to write a function with some nice high-level signature. It turns out most of these would look quite similar: accept some kind of meaningfully-typed value, and yield an IO action as the result.
getTime :: (UTCTime -> IO₂) -> IO₂
randomRIO :: Random r => (r,r) -> (r -> IO₂) -> IO₂
findFile :: RegEx -> (Maybe FilePath -> IO₂) -> IO₂
There's evidently a pattern here, and we'd better write it as
type IO₃ a = (a -> IO₂) -> IO₂  -- If this reminds you of continuation-passing
                                -- style, you're right.
getTime :: IO₃ UTCTime
randomRIO :: Random r => (r,r) -> IO₃ r
findFile :: RegEx -> IO₃ (Maybe FilePath)
Now that starts to look familiar, but we're still only dealing with thinly-disguised plain functions under the hood, and that's risky: each “value-action” has the responsibility of actually passing on the resulting action of any contained function (else the control flow of the entire program is easily disrupted by one ill-behaved action in the middle). We'd better make that requirement explicit. Well, it turns out those are the monad laws, though I'm not sure we can really formulate them without the standard bind/join operators.
At any rate, we've now reached a formulation of IO that has a proper monad instance:
import Control.Monad (ap)

data IO₄ a = TxtOut String (IO₄ a)
           | TxtIn (String -> IO₄ a)
           | TerminateWith a

txtOut :: String -> IO₄ ()
txtOut s = TxtOut s $ TerminateWith ()

txtIn :: IO₄ String
txtIn = TxtIn TerminateWith

instance Functor IO₄ where
    fmap f (TerminateWith a) = TerminateWith $ f a
    fmap f (TxtIn g)         = TxtIn $ fmap f . g
    fmap f (TxtOut s c)      = TxtOut s $ fmap f c

instance Applicative IO₄ where
    pure  = TerminateWith
    (<*>) = ap

instance Monad IO₄ where
    TerminateWith x >>= f = f x
    TxtOut s c      >>= f = TxtOut s $ c >>= f
    TxtIn g         >>= f = TxtIn $ (>>= f) . g
Obviously this is not an efficient implementation of IO, but it's in principle usable.
Monads serve basically to compose functions together in a chain. Period.
Now the way they compose differs across the existing monads, thus resulting in different behaviors (e.g., to simulate mutable state in the state monad).
The confusion about monads is that being so general, i.e., a mechanism to compose functions, they can be used for many things, thus leading people to believe that monads are about state, about IO, etc, when they are only about "composing functions".
Now, one interesting thing about monads is that the result of the composition is always of type "M a", that is, a value inside an envelope tagged with "M". This feature happens to be really nice for implementing, for example, a clear separation between pure and impure code: declare all impure actions as functions of type "IO a" and provide no function, when defining the IO monad, to take the "a" value out of an "IO a". The result is that no function can be pure and at the same time take a value out of an "IO a", because there is no way to take such a value while staying pure (the function must be inside the "IO" monad to use such a value). (NOTE: well, nothing is perfect, so the "IO straitjacket" can be broken using unsafePerformIO :: IO a -> a, thus polluting what was supposed to be a pure function, but this should be used very sparingly and only when you really know you are not introducing any impure code with side effects.)
Monads are just a convenient framework for solving a class of recurring problems. First, monads must be functors (i.e. must support mapping without looking at the elements (or their type)); they must also bring a binding (or chaining) operation and a way to create a monadic value from an element type (return). Finally, bind and return must satisfy the monad laws: left and right identity, plus associativity of binding. (Alternatively one could define monads to have a flattening operation instead of binding.)
The list monad is commonly used to deal with non-determinism. The bind operation selects one element of the list (intuitively, all of them in parallel worlds), lets the programmer do some computation with them, and then combines the results in all worlds into a single list (by concatenating, or flattening, a nested list). Here is how one would define a permutation function in the monadic framework of Haskell:
perm [e] = [[e]]
perm l   = do (leader, index) <- zip l [0 :: Int ..]
              let shortened = take index l ++ drop (index + 1) l
              trailer <- perm shortened
              return (leader : trailer)
Here is an example repl session:
*Main> perm "a"
["a"]
*Main> perm "ab"
["ab","ba"]
*Main> perm ""
[]
*Main> perm "abc"
["abc","acb","bac","bca","cab","cba"]
It should be noted that the list monad is in no way a side effecting computation. A mathematical structure being a monad (i.e. conforming to the above mentioned interfaces and laws) does not imply side effects, though side-effecting phenomena often nicely fit into the monadic framework.
You need monads if you have a type constructor and functions that return values of that type family. Eventually, you would like to combine these kinds of functions together. These are the three key elements to answering why.
Let me elaborate. You have Int, String and Real and functions of type Int -> String, String -> Real and so on. You can combine these functions easily, ending with Int -> Real. Life is good.
Then, one day, you need to create a new family of types. It could be because you need to consider the possibility of returning no value (Maybe), returning an error (Either), multiple results (List) and so on.
Notice that Maybe is a type constructor. It takes a type, like Int and returns a new type Maybe Int. First thing to remember, no type constructor, no monad.
Of course, you want to use your type constructor in your code, and soon you end with functions like Int -> Maybe String and String -> Maybe Float. Now, you can't easily combine your functions. Life is not good anymore.
And here's where monads come to the rescue. They allow you to combine that kind of functions again. You just need to change the composition (.) for the Kleisli composition (>=>).
Why do we need monadic types?
Since it was the quandary of I/O and its observable effects in nonstrict languages like Haskell that brought the monadic interface to such prominence:
[...] monads are used to address the more general problem of computations (involving state, input/output, backtracking, ...) returning values: they do not solve any input/output-problems directly but rather provide an elegant and flexible abstraction of many solutions to related problems. [...] For instance, no less than three different input/output-schemes are used to solve these basic problems in Imperative functional programming, the paper which originally proposed `a new model, based on monads, for performing input/output in a non-strict, purely functional language'. [...]
[Such] input/output-schemes merely provide frameworks in which side-effecting operations can safely be used with a guaranteed order of execution and without affecting the properties of the purely functional parts of the language.
Claus Reinke (pages 96-97 of 210).
(emphasis by me.)
[...] When we write effectful code – monads or no monads – we have to constantly keep in mind the context of expressions we pass around.
The fact that monadic code ‘desugars’ (is implementable in terms of) side-effect-free code is irrelevant. When we use monadic notation, we program within that notation – without considering what this notation desugars into. Thinking of the desugared code breaks the monadic abstraction. A side-effect-free, applicative code is normally compiled to (that is, desugars into) C or machine code. If the desugaring argument has any force, it may be applied just as well to the applicative code, leading to the conclusion that it all boils down to the machine code and hence all programming is imperative.
[...] From the personal experience, I have noticed that the mistakes I make when writing monadic code are exactly the mistakes I made when programming in C. Actually, monadic mistakes tend to be worse, because monadic notation (compared to that of a typical imperative language) is ungainly and obscuring.
Oleg Kiselyov (page 21 of 26).
The most difficult construct for students to understand is the monad. I introduce IO without mentioning monads.
Olaf Chitil.
More generally:
Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.
Considering that the concept of monads is not likely to disappear from the functional programming landscape any time soon, it is vital that we, as the functional programming community, somehow overcome the problems novices encounter when first studying monads.
Tim Steenvoorden, Jurriën Stutterheim, Erik Barendsen and Rinus Plasmeijer.
If only there was another way to specify "a guaranteed order of execution" in Haskell, while keeping the ability to separate regular Haskell definitions from those involved in I/O (and its observable effects) - translating this variation of Philip Wadler's echo:
val echoML : unit -> unit
fun echoML () = let val c = getcML () in
                    if c = #"\n" then
                        ()
                    else
                        let val _ = putcML c in
                            echoML ()
                        end
                end
fun putcML c = TextIO.output1(TextIO.stdOut,c);
fun getcML () = valOf(TextIO.input1(TextIO.stdIn));
...could then be as simple as:
echo :: OI -> ()
echo u = let !(u1:u2:u3:_) = partsOI u in
         let !c = getChar u1 in
         if c == '\n' then
             ()
         else
             let !_ = putChar c u2 in
             echo u3
where:
data OI -- abstract
foreign import ccall "primPartOI" partOI :: OI -> (OI, OI)
⋮
foreign import ccall "primGetCharOI" getChar :: OI -> Char
foreign import ccall "primPutCharOI" putChar :: Char -> OI -> ()
⋮
and:
partsOI :: OI -> [OI]
partsOI u = let !(u1, u2) = partOI u in u1 : partsOI u2
How would this work? At run-time, Main.main receives an initial OI pseudo-data value as an argument:
module Main(main) where
main :: OI -> ()
⋮
...from which other OI values are produced, using partOI or partsOI. All you have to do is ensure each new OI value is used at most once, in each call to an OI-based definition, foreign or otherwise. In return, you get back a plain ordinary result - it isn't e.g. paired with some odd abstract state, or requires the use of a callback continuation, etc.
Using OI, instead of the unit type () like Standard ML does, means we can avoid always having to use the monadic interface:
Once you're in the IO monad, you're stuck there forever, and are reduced to Algol-style imperative programming.
Robert Harper.
But if you really do need it:
type IO a = OI -> a

unitIO :: a -> IO a
unitIO x = \ u -> let !_ = partOI u in x

bindIO :: IO a -> (a -> IO b) -> IO b
bindIO m k = \ u -> let !(u1, u2) = partOI u in
                    let !x = m u1 in
                    let !y = k x u2 in
                    y
⋮
So, monadic types aren't always needed - there are other interfaces out there:
LML had a fully fledged implementation of oracles running on a multi-processor (a Sequent Symmetry) back in ca 1989. The description in the Fudgets thesis refers to this implementation. It was fairly pleasant to work with and quite practical.
[...]
These days everything is done with monads so other solutions are sometimes forgotten.
Lennart Augustsson (2006).
Wait a moment: since it so closely resembles Standard ML's direct use of effects, is this approach and its use of pseudo-data referentially transparent?
Absolutely - just find a suitable definition of "referential transparency"; there's plenty to choose from...

How practical is it to embed the core of a language with an effectful function space (like ML) into Haskell?

As Moggi proposed 20 years ago, the effectful function space -> of languages like ML can be decomposed into the standard total function space => plus a strong monad T to capture effects.
A -> B decomposes to A => (T B)
Now, Haskell supports monads, including an IO monad that appears sufficient for the effects in ML, and it has a function space that contains => (but also includes partial functions). So, we should be able to translate a considerable fragment of ML into Haskell via this decomposition. In theory I think this works.
My question is whether an embedding like this can be practical: is it possible to design a Haskell library that allows programming in Haskell in a style not too far from ML? And if so how will the performance be?
My criteria for "practical" is that existing ML code with extensive use of effects could be relatively easily transcribed into Haskell via the embedding, including complicated cases involving higher-order functions.
To make this concrete, my own attempt at such a transcription via the embedding is below. The main function is a transcription of some simple ML code that imperatively generates 5 distinct variable names. Rather than use the decomposition directly, my version lifts functions so that they evaluate their arguments - the definitions prior to main are a mini-library including lifted primitives. This works okay, but some aspects aren't totally satisfactory.
There's a little too much syntactic noise for the injection of values into computations via val. Having unlifted versions of functions (like rdV) would help this, at the cost of requiring these to be defined.
Non-value definitions like varNum require monadic binding via <- in a do. This then forces any definitions that depend on them to also be in the same do expression.
It seems then that the whole program might end up being in one huge do expression. This is how ML programs are often considered, but in Haskell it's not quite as well supported - e.g., you're forced to use case instead of equations.
I guess there will be some laziness despite threading the IO monad throughout. Given that the ML program would be designed for strict evaluation, the laziness should probably be removed. I'm uncertain what the best way to do this is though.
So, any advice on improving this, or on better approaches using the same decomposition, or even quite different ways of achieving the same broad goal of programming in Haskell using a style that mirrors ML?
(It's not that I dislike the style of Haskell, it's just that I'd like to be able to map existing ML code easily.)
import Data.IORef
import Control.Monad

val :: Monad m => a -> m a
val = return

ref = join . liftM newIORef

rdV = readIORef  -- Unlifted, hence takes a value

(!=) r x = do { rr <- r; xx <- x; writeIORef rr xx }

(.+), (.-) :: IO Int -> IO Int -> IO Int
( (.+), (.-) ) = ( liftM2 (+), liftM2 (-) )

(.:) :: IO a -> IO [a] -> IO [a]
(.:) = liftM2 (:)

showIO :: Show a => IO a -> IO String
showIO = liftM show

main = do
    varNum <- ref (val 0)
    let newVar = (=<<) $ \() -> val varNum != (rdV varNum .+ val 1) >>
                                val 'v' .: (showIO (rdV varNum))
    let gen = (=<<) $ \n -> case n of 0  -> return []
                                      nn -> (newVar $ val ()) .: (gen (val n .- val 1))
    gen (val 5)
Here's a possible way, by sigfpe. It doesn't cover lambdas, but it seems it can be extended to them.

Why are side-effects modeled as monads in Haskell?

Could anyone give some pointers on why the impure computations in Haskell are modelled as monads?
I mean, a monad is just an interface with 4 operations, so what was the reasoning behind modelling side-effects with it?
Suppose a function has side effects. If we take all the effects it produces as the input and output parameters, then the function is pure to the outside world.
So, for an impure function
f' :: Int -> Int
we add the RealWorld to the consideration
f :: Int -> RealWorld -> (Int, RealWorld)
-- input some states of the whole world,
-- modify the whole world because of the side effects,
-- then return the new world.
then f is pure again. We define a parametrized data type type IO a = RealWorld -> (a, RealWorld), so we don't need to type RealWorld so many times, and can just write
f :: Int -> IO Int
To the programmer, handling a RealWorld directly is too dangerous—in particular, if a programmer gets their hands on a value of type RealWorld, they might try to copy it, which is basically impossible. (Think of trying to copy the entire filesystem, for example. Where would you put it?) Therefore, our definition of IO encapsulates the states of the whole world as well.
Composition of "impure" functions
These impure functions are useless if we can't chain them together. Consider
getLine :: IO String ~ RealWorld -> (String, RealWorld)
getContents :: String -> IO String ~ String -> RealWorld -> (String, RealWorld)
putStrLn :: String -> IO () ~ String -> RealWorld -> ((), RealWorld)
We want to
get a filename from the console,
read that file, and
print that file's contents to the console.
How would we do it if we could access the real world states?
printFile :: RealWorld -> ((), RealWorld)
printFile world0 = let (filename, world1) = getLine world0
                       (contents, world2) = (getContents filename) world1
                   in  (putStrLn contents) world2 -- results in ((), world3)
We see a pattern here. The functions are called like this:
...
(<result-of-f>, worldY) = f worldX
(<result-of-g>, worldZ) = g <result-of-f> worldY
...
So we could define an operator ~~~ to bind them:
(~~~) :: (IO b) -> (b -> IO c) -> IO c

(~~~) :: (RealWorld -> (b, RealWorld))
      -> (b -> RealWorld -> (c, RealWorld))
      -> (RealWorld -> (c, RealWorld))
(f ~~~ g) worldX = let (resF, worldY) = f worldX
                   in  g resF worldY
then we could simply write
printFile = getLine ~~~ getContents ~~~ putStrLn
without touching the real world.
"Impurification"
Now suppose we want to make the file content uppercase as well. Uppercasing is a pure function
upperCase :: String -> String
But to make it into the real world, it has to return an IO String. It is easy to lift such a function:
impureUpperCase :: String -> RealWorld -> (String, RealWorld)
impureUpperCase str world = (upperCase str, world)
This can be generalized:
impurify :: a -> IO a
impurify :: a -> RealWorld -> (a, RealWorld)
impurify a world = (a, world)
so that impureUpperCase = impurify . upperCase, and we can write
printUpperCaseFile =
getLine ~~~ getContents ~~~ (impurify . upperCase) ~~~ putStrLn
(Note: Normally we write getLine ~~~ getContents ~~~ (putStrLn . upperCase))
We were working with monads all along
Now let's see what we've done:
We defined an operator (~~~) :: IO b -> (b -> IO c) -> IO c which chains two impure functions together
We defined a function impurify :: a -> IO a which converts a pure value to impure.
Now we make the identification (>>=) = (~~~) and return = impurify, and see? We've got a monad.
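Packaged as actual Haskell (a sketch: GHC's real IO and RealWorld are abstract and primitive, so stand-ins are used here):

data RealWorld = RealWorld  -- stand-in; GHC's RealWorld is primitive

newtype MyIO a = MyIO { runMyIO :: RealWorld -> (a, RealWorld) }

instance Functor MyIO where
    fmap f (MyIO g) = MyIO $ \w -> let (a, w') = g w in (f a, w')

instance Applicative MyIO where
    pure a = MyIO $ \w -> (a, w)       -- impurify
    mf <*> mx = mf >>= \f -> fmap f mx

instance Monad MyIO where
    MyIO f >>= g = MyIO $ \w ->        -- (~~~)
        let (x, w') = f w in runMyIO (g x) w'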
Technical note
To ensure it's really a monad, there are still a few axioms which need to be checked too:
return a >>= f = f a

impurify a = \world -> (a, world)

(impurify a ~~~ f) worldX = let (resF, worldY) = (\world -> (a, world)) worldX
                            in  f resF worldY
                          = let (resF, worldY) = (a, worldX)
                            in  f resF worldY
                          = f a worldX
f >>= return = f

(f ~~~ impurify) worldX = let (resF, worldY) = f worldX
                          in  impurify resF worldY
                        = let (resF, worldY) = f worldX
                          in  (resF, worldY)
                        = f worldX
f >>= (\x -> g x >>= h) = (f >>= g) >>= h
Left as exercise.
Could anyone give some pointers on why the impure computations in Haskell are modelled as monads?
This question contains a widespread misunderstanding.
Impurity and Monad are independent notions.
Impurity is not modeled by Monad.
Rather, there are a few data types, such as IO, that represent imperative computation.
And for some of those types, a tiny fraction of their interface corresponds to the interface pattern called "Monad".
Moreover, there is no known pure/functional/denotative explanation of IO (and there is unlikely to be one, considering the "sin bin" purpose of IO), though there is the commonly told story about World -> (a, World) being the meaning of IO a.
That story cannot truthfully describe IO, because IO supports concurrency and nondeterminism.
The story doesn't even work for deterministic computations that allow mid-computation interaction with the world.
For more explanation, see this answer.
Edit: On re-reading the question, I don't think my answer is quite on track.
Models of imperative computation do often turn out to be monads, just as the question said.
The asker might not really assume that monadness in any way enables the modeling of imperative computation.
As I understand it, someone called Eugenio Moggi first noticed that a previously obscure mathematical construct called a "monad" could be used to model side effects in computer languages, and hence specify their semantics using Lambda calculus. When Haskell was being developed there were various ways in which impure computations were modelled (see Simon Peyton Jones' "hair shirt" paper for more details), but when Phil Wadler introduced monads it rapidly became obvious that this was The Answer. And the rest is history.
Could anyone give some pointers on why the impure computations in Haskell are modelled as monads?
Well, because Haskell is pure. You need a mathematical concept to distinguish impure computations from pure ones at the type level, and to model program flow accordingly.
This means you'll have to end up with some type IO a that models an impure computation. Then you need to know ways of combining these computations, of which applying in sequence (>>=) and lifting a value (return) are the most obvious and basic ones.
With these two, you've already defined a monad (without even thinking of it) ;)
In addition, monads provide very general and powerful abstractions, so many kinds of control flow can be conveniently generalized in monadic functions like sequence, liftM, or special syntax, making impurity not such a special case.
See monads in functional programming and uniqueness typing (the only alternative I know) for more information.
As you say, Monad is a very simple structure. One half of the answer is: Monad is the simplest structure that we could possibly give to side-effecting functions and be able to use them. With Monad we can do two things: we can treat a pure value as a side-effecting value (return), and we can apply a side-effecting function to a side-effecting value to get a new side-effecting value (>>=). Losing the ability to do either of these things would be crippling, so our side-effecting type needs to be "at least" Monad, and it turns out Monad is enough to implement everything we've needed to so far.
The other half is: what's the most detailed structure we could give to "possible side effects"? We can certainly think about the space of all possible side effects as a set (the only operation that requires is membership). We can combine two side effects by doing them one after another, and this will give rise to a different side effect (or possibly the same one - if the first was "shutdown computer" and the second was "write file", then the result of composing these is just "shutdown computer").
Ok, so what can we say about this operation? It's associative; that is, if we combine three side effects, it doesn't matter which order we do the combining in. If we do (write file then read socket) then shutdown computer, it's the same as doing write file then (read socket then shutdown computer). But it's not commutative: ("write file" then "delete file") is a different side effect from ("delete file" then "write file"). And we have an identity: the special side effect "no side effects" works ("no side effects" then "delete file" is the same side effect as just "delete file"). At this point any mathematician is thinking "Group!" But groups have inverses, and there's no way to invert a side effect in general; "delete file" is irreversible. So the structure we are left with is that of a monoid, which means our side-effecting functions should be monads.
Is there a more complex structure? Sure! We could divide possible side effects into filesystem-based effects, network-based effects and more, and we could come up with more elaborate rules of composition that preserved these details. But again it comes down to: Monad is very simple, and yet powerful enough to express most of the properties we care about. (In particular, associativity and the other axioms let us test our application in small pieces, with confidence that the side effects of the combined application will be the same as the combination of the side effects of the pieces).
It's actually quite a clean way to think of I/O in a functional way.
In most programming languages, you do input/output operations. In Haskell, imagine writing code not to do the operations, but to generate a list of the operations that you would like to do.
Monads are just pretty syntax for exactly that.
If you want to know why monads as opposed to something else, I guess the answer is that they're the best functional way to represent I/O that people could think of when they were making Haskell.
AFAIK, the reason is to be able to include side-effect checks in the type system. If you want to know more, listen to those SE-Radio episodes:
Episode 108: Simon Peyton Jones on Functional Programming and Haskell
Episode 72: Erik Meijer on LINQ
Above there are very good detailed answers with theoretical background. But I want to give my view on the IO monad. I am not an experienced Haskell programmer, so it may be quite naive or even wrong. But it helped me to deal with the IO monad to some extent (note that it does not relate to other monads).
First I want to say that the example with the "real world" is not too clear to me, as we cannot access its (the real world's) previous states. Maybe it does not relate to monad computations at all, but it is desired in the sense of referential transparency, which is generally present in Haskell code.
So we want our language (Haskell) to be pure. But we need input/output operations, as without them our program cannot be useful. And those operations cannot be pure by their nature. So the only way to deal with this is to separate the impure operations from the rest of the code.
Here the monad comes in. Actually, I am not sure that there cannot exist another construct with similar needed properties, but the point is that the monad has these properties, so it can be used (and it is used successfully). The main property is that we cannot escape from it. The monad interface has no operations to get rid of the monad around our value. Other (non-IO) monads provide such operations and allow pattern matching (e.g. Maybe), but those operations are not in the monad interface. Another required property is the ability to chain operations.
If we think about what we need in terms of the type system, we come to the fact that we need a type with a constructor which can be wrapped around any value. The constructor must be private, as we prohibit escaping from it (i.e. pattern matching). But we need a function to put a value into this constructor (here return comes to mind). And we need a way to chain operations. If we think about it for some time, we come to the fact that the chaining operation must have the type that >>= has. So we come to something very similar to a monad. I think, if we now analyze possible contradictory situations with this construct, we will arrive at the monad axioms.
Note that this developed construct does not have anything in common with impurity. It only has the properties we wished for in order to deal with impure operations, namely: no escaping, chaining, and a way to get in.
Now some set of impure operations is predefined by the language within this selected monad, IO. We can combine those operations to create new impure operations. And all those operations will have to have IO in their type. Note however, that the presence of IO in the type of some function does not make this function impure. But as I understand it, it is a bad idea to write pure functions with IO in their type, as our idea was initially to separate pure and impure functions.
Finally, I want to say that a monad does not turn impure operations into pure ones. It only allows us to separate them effectively. (I repeat that this is only my understanding.)
