I often find this pattern in Haskell code:
options :: MVar OptionRecord
options = unsafePerformIO $ newEmptyMVar
...
doSomething :: Foo -> Bar
doSomething = unsafePerformIO $ do
  opt <- readMVar options
  doSomething' where ...
Basically, one has a record of options or something similar that is set once at the program's start. Because the programmer is lazy, he doesn't want to carry the options record all over the program, so he defines an MVar to hold it - created by an ugly use of unsafePerformIO. The programmer ensures that the state is set only once, before any other operation takes place. Now each part of the program has to use unsafePerformIO again, just to extract the options.
In my opinion, such a variable can be considered pragmatically pure (don't beat me). Is there a library that abstracts this concept away and ensures that the variable is set only once, i.e. that no read happens before that initialization, so that one doesn't have to write unsafeFireZeMissilesAndMakeYourCodeUglyAndDisgustingBecauseOfThisLongFunctionName?
Those who would trade essential referential transparency for a little
temporary convenience deserve neither
purity nor convenience.
This is a bad idea. The code that you're finding this in is bad code.*
There's no way to fully wrap this pattern up safely, because it is not a safe pattern. Do not do this in your code. Do not look for a safe way to do this. There is not a safe way to do this. Put the unsafePerformIO down on the floor, slowly, and back away from the console...
*There are legitimate reasons that people do use top level MVars, but those reasons have to do with bindings to foreign code for the most part, or a few other things where the alternative is very messy. In those instances, as far as I know, however, the top level MVars are not accessed from behind unsafePerformIO.
If you are using an MVar to hold settings or something similar, why don't you try the Reader monad?
foo :: ReaderT OptionRecord IO ()
foo = do
  options <- ask
  fireMissiles

main = runReaderT foo (OptionRecord "foo")
(And regular Reader if you don't require IO :P)
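For example, here is a minimal sketch of pulling a single field out with asks (this assumes the mtl package; optName is a made-up field name for illustration):

import Control.Monad.Reader (ReaderT, runReaderT, asks)
import Control.Monad.IO.Class (liftIO)

-- Hypothetical options record; optName is invented for this sketch.
data OptionRecord = OptionRecord { optName :: String }

foo :: ReaderT OptionRecord IO ()
foo = do
  n <- asks optName                         -- read just the field you need
  liftIO (putStrLn ("options say: " ++ n))  -- lift plain IO into the ReaderT stack

main :: IO ()
main = runReaderT foo (OptionRecord "foo")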
Use implicit parameters. They're slightly less heavyweight than making every function have Reader or ReaderT in its type. You do have to change the type signatures of your functions, but I think such a change can be scripted. (Would make a nice feature for a Haskell IDE.)
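A minimal sketch of what that looks like; OptionRecord and its verbose field are made up for illustration:

{-# LANGUAGE ImplicitParams #-}

-- Hypothetical options record for this sketch.
data OptionRecord = OptionRecord { verbose :: Bool }

-- The implicit parameter shows up in the type, but not in the argument list.
doSomething :: (?options :: OptionRecord) => String -> String
doSomething s
  | verbose (?options) = "[debug] " ++ s
  | otherwise          = s

main :: IO ()
main =
  let ?options = OptionRecord { verbose = True }  -- bound once, near the top
  in  putStrLn (doSomething "hello")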
There is an important reason for not using this pattern. As far as I know, in
options :: MVar OptionRecord
options = unsafePerformIO $ newEmptyMVar
Haskell gives no guarantee that options will be evaluated only once. Since options is a pure value, it can be memoized and reused, but it can also be recomputed on every use (i.e. inlined), and the meaning of the program must not change either way (which is not true in your case).
If you still decide to use this pattern, be sure to add {-# NOINLINE options #-}, otherwise it might get inlined and your program will fail! (And by this we're getting out of the guarantees given by the language and the type system and relying solely on the implementation of a particular compiler.)
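For reference, the full shape of the (still discouraged) pattern looks like this; a minimal sketch, with OptionRecord as a stand-in type:

import Control.Concurrent.MVar (MVar, newEmptyMVar)
import System.IO.Unsafe (unsafePerformIO)

-- Stand-in type; in real code this would be your actual options record.
data OptionRecord = OptionRecord String

options :: MVar OptionRecord
options = unsafePerformIO newEmptyMVar
{-# NOINLINE options #-}  -- keep GHC from inlining and re-running the IO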
This topic has been widely discussed and possible solutions are nicely summarized on Haskell Wiki in Top level mutable state. Currently it's not possible to safely abstract this pattern without some additional compiler support.
I often find this pattern in Haskell code:
Read different code.
Because the programmer is lazy, he doesn't want to carry the options record all over the program, so he defines an MVar to hold it - created by an ugly use of unsafePerformIO. The programmer ensures that the state is set only once, before any other operation takes place. Now each part of the program has to use unsafePerformIO again, just to extract the options.
Sounds like literally exactly what the reader monad accomplishes, except that the reader monad does it in a safe way. Instead of accommodating your own laziness, just write actual good code.
Related
Let's say I have multiple threads that are reading from a file and I want to make sure that only a single thread is reading from the file at any point in time.
One way to implement this is to use an mvar :: MVar () and ensure mutual exclusion as follows:
thread = do
  ...
  _ <- takeMVar mvar
  x <- readFile "somefile" -- critical section
  putMVar mvar ()
  ...
  -- do something that evaluates x.
The above should work fine in strict languages, but unless I'm missing something, I might run into problems with this approach in Haskell. In particular, since x is evaluated only after the thread exits the critical section, it seems to me that the file will only be read after the thread has executed putMVar, which defeats the point of using MVars in the first place, as multiple threads may read the file at the same time.
Is the problem that I'm describing real and, if so, how do I get around it?
Yes, it's real. You get around it by avoiding all the base functions that are implemented using unsafeInterleaveIO. I don't have a complete list, but that's at least readFile, getContents, hGetContents. IO actions that don't do lazy IO -- like hGet or hGetLine -- are fine.
If you must use lazy IO, then fully evaluate its results in an IO action inside the critical section, e.g. by combining rnf and evaluate.
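For example, here is a sketch of the critical section from the question with the result forced before the lock is released (this assumes the deepseq package; the mvar and file name are taken from the question):

import Control.Concurrent.MVar (MVar, takeMVar, putMVar)
import Control.DeepSeq (rnf)
import Control.Exception (evaluate)

readFileExclusive :: MVar () -> FilePath -> IO String
readFileExclusive lock path = do
  _ <- takeMVar lock
  contents <- readFile path   -- lazy IO: nothing has actually been read yet
  evaluate (rnf contents)     -- force the whole string inside the critical section
  putMVar lock ()
  return contents

In real code you would probably also reach for withMVar or bracket, so the lock is released even if an exception is thrown.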
Some other commentary on related things that aren't directly answers to this question:
Laziness and lazy IO are really separate concepts. They happen to share a name because humans are lazy at naming. Most IO actions do not involve lazy IO and do not run into this problem.
There is a related problem of stuffing unevaluated pure computations into your MVar and accidentally evaluating them on a different thread than you were expecting, but if you avoid lazy IO then evaluating on the wrong thread is merely a performance bug rather than an actual semantics bug.
readFile should be named unsafeReadFile because it's unsafe in the same way as unsafeInterleaveIO. If you stay away from functions that have, or should have, the unsafe prefix then you won't have this problem.
Haskell isn't a lazily evaluated language. It's a language in which, as in mathematics, evaluation order doesn't matter (except that you mustn't spend an unbounded amount of time trying to evaluate a function's argument before evaluating the function body). Compilers are free to reorder computations for efficiency reasons, and GHC does, so programs compiled with GHC aren't lazily evaluated as a rule.
readFile (along with getContents and hGetContents) is one of a small number of standard Haskell functions without the unsafe prefix that violate Haskell's value semantics. GHC has to specially disable its optimizations when it encounters such functions because they make program transformations observable that aren't supposed to be observable.
These functions are convenient hacks that can make some toy programs easier to write. You shouldn't use them in threaded code, or, in my opinion, at all. I think they shouldn't even be used in introductory programming courses (which is probably what they were meant for) because they give beginners a totally wrong impression of how evaluation in Haskell is supposed to work.
To add IO functions to a programming language interpreter written in Haskell, I have basically two options:
Modify the entire interpreter to run inside the IO monad
Have the runtime functions that can be invoked by interpreted programs use unsafePerformIO.
The former feels like a bad idea to me -- this effectively negates any purity benefits by having IO reach practically everywhere in the program. I also currently use ST heavily, and would have to modify large quantities of the program to achieve this, as there is no way I can see to use both ST and IO at the same time (?).
The latter makes me nervous -- as the function name states, it is unsafe, but I think in this situation it may be justified. Particularly:
The amount of code touched by this change would be very small.
The points at which IO may be performed are explicitly sequenced already by the use of seq at control points during evaluation of interpreted expressions.
Perhaps more importantly, values returned by IO actions would only be used within interpreted sections of code, where I can guarantee referential transparency by the fact that the interpreter cannot be called multiple times with the same arguments, as an operation counter will be threaded through the entire system as part of the same change, and is always passed with a unique value to every function that would use unsafePerformIO.
In this circumstance, is there a good reason not to use unsafePerformIO?
Update
I was asked why I want to retain purity in the interpreter. There are a number of reasons, but perhaps the most pressing is that I intend to later build a compiler for this language, and the language will include a variety of metaprogramming techniques that will require the compiler to include the interpreter, but I want to be able to guarantee purity of the results of compilation. The language will have a pure subset for this purpose, and I would like the interpreter to be pure when executing that subset.
If I understand it correctly, you want to add IO actions to the interpreted language (impure primops), while the interpreter itself stays pure.
The first option is to abstract the primops away from the interpreter. For example, the interpreter could run in some unspecified monad, while the primops are injected:
data Primops m = Primops
  { putChar :: Char -> m ()
  , getChar :: m Char
  , ...
  }

interpret :: Monad m => Primops m -> Program -> m ()
Now the interpreter can't perform any IO action except those in the closed list of primops. (You can achieve a similar result with a custom monad instead of passing the primops as an argument.)
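For concreteness, here is a sketch of two interpretations of such a Primops record: one backed by real IO, and one that is pure (handy for tests). It assumes the mtl package for the pure variant, and writes the record out again so the sketch is self-contained:

import Prelude hiding (putChar, getChar)
import qualified System.IO as Sys
import Control.Monad.State (State, state, modify)

-- The record from above, minus the elided extra fields.
data Primops m = Primops
  { putChar :: Char -> m ()
  , getChar :: m Char
  }

-- Real effects: primops backed by the actual terminal.
ioPrimops :: Primops IO
ioPrimops = Primops
  { putChar = Sys.putChar
  , getChar = Sys.getChar
  }

-- Pure "effects": input and output are just strings threaded as state.
purePrimops :: Primops (State (String, String))
purePrimops = Primops
  { putChar = \c -> modify (\(inp, out) -> (inp, out ++ [c]))
  , getChar = state (\(c:inp, out) -> (c, (inp, out)))  -- partial: assumes input remains
  }

-- With the hypothetical interpret from above:
--   interpret ioPrimops prog                                 -- performs real I/O
--   runState (interpret purePrimops prog) ("some input", "") -- runs the same program purely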
But I'd consider it over-engineering until you say exactly why you need a pure interpreter. Probably you don't? If you just want to make the pure parts of the interpreter easy to test, then it is probably better to extract those parts into separate pure functions. That way the top-level entry point will be impure but small, yet all the interpreter's logic will be testable.
Given a Haskell value (edit per Rein Heinrich's comment) f:
f :: IO Int
f = ... -- ignoring its implementation
Quoting "Type-Driven Development with Idris,"
The key property of a pure function is that the same inputs always produce the same result. This property is known as referential transparency
Is f, and indeed are all IO ... functions in Haskell, pure? It seems to me that they are not, since lookInDatabase :: IO DBThing won't always return the same value:
at t=0, the DB might be down
at t=1, the DB might be up and MyDbThing would result
In short, is f (and IO ... functions in general) pure? If yes, then please correct my incorrect understanding given my attempt to disprove the functional purity of f with my t=... examples.
IO is really a separate language, conceptually. It's the language of the Haskell RTS (runtime system). It's implemented in Haskell as a (relatively simple) embedded DSL whose "scripts" have the type IO a.
So Haskell functions that return values of type IO a are not actually the things being executed at runtime — what gets executed is the IO a value itself. So these functions are indeed pure, but their return values represent non-pure computations.
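For instance, a small sketch:

-- A pure function: for the same name it always returns the same IO action.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

main :: IO ()
main = do
  let hi = greet "world"  -- just a value of type IO (); nothing runs here
  hi                      -- the runtime executes the action: prints once
  hi                      -- executing the same value again repeats the effect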
From a language design point of view, IO is a really elegant hack that keeps the non-pure ugliness completely isolated while at the same time integrating it tightly into its pure surroundings, without resorting to special casing. In other words, the design does not solve the problems caused by impure IO, but it does a great job of at least not letting them affect the pure parts of your code.
The next step would be to look into FRP — with FRP you can make the layer that contains IO even thinner and move even more of non-pure logic into pure logic.
You might also want to read John Backus' writings on the topic of functional programming, the limitations of the von Neumann architecture, etc. Conal Elliott is also a name to google if you're interested in the relationship between purity and IO.
P.S. It's also worth noting that while IO relies on monads to work around an aspect of lazy evaluation, and monads are a very nice way of structuring embedded DSLs (of which IO is just one example), monads are much more general than IO. So try not to think about IO and monads in the same context too much — they are two separate things, and either could exist without the other.
First of all, you're right to notice that I/O actions are not pure. That's impossible. But purity of all functions is one of Haskell's selling points, so what's going on?
Whether you like it or not, a function that produces an IO Something will, for the same arguments, always return the same IO Something. The IO monad lets you "hide" actions inside the container the monad acts like. When you have an IO String, that value does not contain a String/[Char], but rather a sort of promise that you'll get that String somehow in the future. Thus, IO contains the information of what to do when the impure I/O action needs to be performed.
After all, the only way for an IO action to be performed is for it to be named main, or to be a dependency of main. Because of the flexibility of monads, you can "concatenate" IO actions. A program like this... (note: this code is not a good idea)
main = do
  input <- getLine
  putStrLn input

is syntactic sugar for...

main =
  getLine >>= (\input -> putStrLn input)
That states that main is the I/O action of printing to standard output a string read from standard input, followed by a newline character. Did you see the magic? IO is just a wrapper representing what to do, in an impure context, to produce some given output, but not the result of that operation, because that would require the Haskell language to admit impure code.
Think of it as a sort of recipe. If you have a recipe (read: IO monad) for a cake (read: the Something in IO Something), you know how to make the cake, but you can't make the cake yourself (because you could screw up that masterpiece). Instead, the master chef (read: the most basic parts of the Haskell runtime system, responsible for running main) does the dirty work for you (read: the impure/illegal stuff), and, best of all, he won't make any mistakes (read: break code purity)... unless the oven breaks, of course (read: System.IO.Error), but he knows how to clean that up (read: your code will always remain pure).
This is one of the reasons that IO is an opaque type. Its implementation is somewhat controversial (until you read GHC's source code), and is better off left implementation-defined.
Just be happy, because you've been illuminated by purity. A lot of programmers don't even know of Haskell's existence! I hope this has shed some light on the subject!
Haskell is pulling a trick here. IO both is and isn't pure, depending on how you look at it.
On the "IO is pure" side, you're fallen into the very common error of thinking of a function returning an IO DBThing as of it were returning a DBThing. When someone claims that a function with type Stuff -> IO DBThing is pure they are not saying that you can feed it the same Stuff and always get the same DBThing; as you correctly note that is impossible, and also not very useful! What they're saving is that given particular Stuff you'll always get back the same IO DBThing.
You actually can't get a DBThing out of an IO DBThing at all, so Haskell doesn't ever have to worry about the database containing different values (or being unavailable) at different times. All you can do with an IO DBThing is combine it with something else that needs a DBThing and produces some other kind of IO thing; the result of such a combination is an IO thing.
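A small sketch of that restriction (DBThing and lookInDatabase are stand-ins, as in the question):

data DBThing = DBThing

lookInDatabase :: IO DBThing
lookInDatabase = pure DBThing   -- stand-in for a real database call

describe :: DBThing -> String
describe _ = "a DBThing"

-- The only way to "use" the result is to build a bigger IO value:
report :: IO String
report = fmap describe lookInDatabase
-- equivalently: lookInDatabase >>= \db -> return (describe db)

main :: IO ()
main = report >>= putStrLn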
What Haskell is doing here is building up a correspondence between manipulation of pure Haskell values and changes that would happen out in the world outside the program. There are things you can do with some ordinary pure values that don't make any sense with impure operations like altering the state of a database. So using the correspondence between IO values and the outside world, Haskell simply doesn't provide you with any operations on IO values that would correspond to things that don't make sense in the real world.
There are several ways to explain how you're "purely" manipulating the real world. One is to say that IO is just like a state monad, only the state being threaded through is the entire world outside your program (so your Stuff -> IO DBThing function really has an extra hidden argument that receives the world, and actually returns a DBThing along with another world; it's always called with different worlds, and that's why it can return different DBThing values even when called with the same Stuff). Another explanation is that an IO DBThing value is itself an imperative program; your Haskell program is a totally pure function doing no IO, which returns an impure program that does IO, and the Haskell runtime system (impurely) executes the program it returns.
But really these are both simply metaphors. The point is that the IO value simply has a very limited interface which doesn't allow you to do anything that doesn't make sense as a real world action.
Note that the concept of monad hasn't actually come into this. Haskell's IO system really doesn't depend on monads; Monad is just a convenient interface which is sufficiently limited that if you're only using the generic monad interface you also can't break the IO limitations (even if you don't know your monad is actually IO). Since the Monad interface is also interesting enough to write a lot of useful programs, the fact that IO forms a monad allows a lot of code that's useful on other types to be generically reused on IO.
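For example, a small sketch of that reuse: mapM_ and replicateM_ are written once against the generic Monad interface, yet work on IO just like on any other monad.

import Control.Monad (replicateM_)

main :: IO ()
main = do
  mapM_ print [1, 2, 3 :: Int]      -- generic monadic code, reused at IO
  replicateM_ 2 (putStrLn "again")  -- ditto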
Does this mean you actually get to write pure IO code? Not really. This is the "of course IO isn't pure" side of the coin. When you're using the fancy "combining IO functions together" you still have to think about your program executing steps one after the other (or in parallel), affecting and being affected by outside conditions and systems; in short exactly the same kind of reasoning you have to use to write IO code in an imperative language (only with a nicer type system than most of them). Making IO pure doesn't really help you banish impurity from the way you have to think about your code.
So what's the point? Well for one, it gets us a compiler-enforced demarcation of code that can do IO and code that can't. If there's no IO tag on the type then impure IO isn't involved. That would be useful in any language just on its own. And the compiler knows this too; Haskell compilers can apply optimizations to non-IO code that would be invalid in most other languages because it's often impossible to know that a given section of code doesn't have side effects (unless you can see the full implementation of everything the code calls, transitively).
Also, because IO is pure, code analysis tools (including your brain) don't have to treat IO-code specially. If you can pick out a code transformation that would be valid on pure code with the same structure as the IO code, you can do it on the IO code. Compilers make use of this. Many transformations are ruled out by the structure that IO code must use (in order to stay within the bounds of things that have a sensible correspondence to things in the outside world) but they would also be ruled out by any pure code that used the same structure; the careful construction of the IO interface makes "execution order dependency" look like ordinary "data dependency", so you can just use the rules of data dependency to determine the rules of using IO.
Short answer: Yes, that f is referentially transparent.
Whenever you look at it, it equals the same value.
But that doesn't mean it will always bind the same value.
In short, is f (and IO ... functions in general) pure?
So what you're really asking is:
Are IO definitions in Haskell pure?
You're really not going to like it.
Deep Thought.
It depends on what you mean by "pure".
From section 6.1.7 (page 75) of the Haskell 2010 report:
The IO type serves as a tag for operations (actions) that interact with the outside world. The IO type is abstract: no constructors are visible to the user. IO is an instance of the Monad and Functor classes.
the crucial point being:
The IO type is abstract
If Haskell's FFI were sufficiently enhanced, IO could be as simple as:
data IO a -- a tag type: no visible constructors
instance Monad IO where
  return = unitIO
  (>>=)  = bindIO
foreign import ccall "primUnitIO" unitIO :: a -> IO a
foreign import ccall "primBindIO" bindIO :: IO a -> (a -> IO b) -> IO b
⋮
No Haskell definitions whatsoever: all I/O-related activity is performed by calls to foreign code, usually written in the same language as the Haskell implementation. A variation of this approach
is used in Agda:
4 Compiling Agda programs
This section deals with the topic of getting Agda programs to interact
with the real world. Type checking Agda programs requires evaluating
arbitrary terms, and as long as all terms are pure and normalizing this is
not a problem, but what happens when we introduce side effects? Clearly,
we don't want side effects to happen at compile time. Another question is
what primitives the language should provide for constructing side effecting
programs. In Agda, these problems are solved by allowing arbitrary
Haskell functions to be imported as axioms. At compile time, these imported
functions have no reduction behaviour, only at run time is the
Haskell function executed.
(emphasis by me.)
By moving the problem of I/O outside of Haskell or Agda, questions of "purity" are now a matter for that other language (or languages!).
Given these circumstances, there can be no "standard definition" for IO, so there's no common way to determine such a property for that type, let alone any of its expressions. We can't even provide a simple proof that IO is monadic (i.e. it satisfies the monad laws) as return and (>>=) simply cannot be defined in standard Haskell 2010.
To get some idea on how this affects the determining of various IO-related properties, see:
Semantics of fixIO by Levent Erkok, John Launchbury and Andrew Moran.
Tackling the Awkward Squad: … by Simon Peyton Jones (starting from section 3.2 on page 20).
So when you next hear or read about Haskell being "referentially transparent" or "purely functional", you now know that (at least for I/O) they're just conjectures - no actual standard definition means there's no way to prove or disprove them.
(If you're now wondering how Haskell got into this state, I provide some more details here.)
I'm sure it's not, but I've received the type IO FileOffset from System.Posix functions, and I can't figure out what I can do with it. It seems like it's just a rename of the type COff, which seems to be just a wrapper for Int64, and in fact when I get it in GHCi, I can see the number that the IO FileOffset corresponds to. However, I can't add it to anything else, print it out (except through the interpreter), or even convert it to another type. It seems to be immune to show.
How can I actually use this type? I'm new to Haskell so I'm sure I'm missing something fundamental about types and possibly the documentation.
As discussed in numerous other questions, like this, there is never anything you can do with an IO a value as such – except bind it in another IO computation, which eventually has to be invoked from either main or ghci. And this is not some stupid arbitrary restriction of Haskell, but reflects the fact that something like a file offset can impossibly be known without the program first going “out into the world”, doing the file operation, coming back with the result. In impure languages, this kind of thing just suddenly happens when you try to evaluate an IO “function”, but only because half a century of imperative programming has done it this way doesn't mean it's a good idea. In fact it's a cause for quite a lot of bugs in non-purely functional languages, but more importantly it makes it way harder to understand what some library function will actually do – in Haskell you only need to look at the signature, and when there's no IO in it, you can be utterly sure1 it won't do any!
Question remains: how do you actually get any “real” work done? Well, it's pretty clever. For beginners, it's probably helpful to keep to this guideline:
An IO action always needs to be evaluated in a do block.
To retrieve results from such actions, use the val <- action syntax. This can stand anywhere in a do block except at the end. It is equivalent to what procedural languages write as var val = action() or similar. If action had a type IO T, then val will have simply type T!
The value obtained this way can be used anywhere in the same do block below the line you've obtained it from. Quite like in procedural languages.
So if your action was, say,
findOffsetOfFirstChar :: Handle -> Char -> IO FileOffset
you can use it like this:
printOffsetOfQ :: Handle -> IO ()
printOffsetOfQ h = do
  offset <- findOffsetOfFirstChar h 'Q'
  print offset
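A minimal caller might then look like this (a sketch; it still relies on the hypothetical findOffsetOfFirstChar above):

import System.IO (withFile, IOMode(ReadMode))

main :: IO ()
main = withFile "somefile" ReadMode printOffsetOfQ  -- open, pass the Handle, close afterwards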
Later on you'll learn that many of these dos aren't really necessary, but for the time being it's probably easiest to use them everywhere there's IO going on.
1 Some people will now object that there is a thing called unsafePerformIO which allows you to do IO without the signature telling so, but apart from being, well, unsafe, this does not actually belong to the Haskell language but to its foreign function interface.
My question is whether monads in Haskell actually maintain Haskell's purity, and if so how. Frequently I have read about how side effects are impure but that side effects are needed for useful programs (e.g. I/O). In the next sentence it is stated that Haskell's solution to this is monads. Then monads are explained to some degree or another, but not really how they solve the side-effect problem.
I have seen this and this, and my interpretation of the answers is actually one that came to me in my own readings -- the "actions" of the IO monad are not the I/O themselves but objects that, when executed, perform I/O. But it occurs to me that one could make the same argument for any code or perhaps any compiled executable. Couldn't you say that a C++ program only produces side effects when the compiled code is executed? That all of C++ is inside the IO monad and so C++ is pure? I doubt this is true, but I honestly don't know in what way it is not. In fact, didn't Moggi (sp?) initially use monads to model the denotational semantics of imperative programs?
Some background: I am a fan of Haskell and functional programming and I hope to learn more about both as my studies continue. I understand the benefits of referential transparency, for example. The motivation for this question is that I am a grad student and I will be giving 2 1-hour presentations to a programming languages class, one covering Haskell in particular and the other covering functional programming in general. I suspect that the majority of the class is not familiar with functional programming, maybe having seen a bit of scheme. I hope to be able to (reasonably) clearly explain how monads solve the purity problem without going into category theory and the theoretical underpinnings of monads, which I wouldn't have time to cover and anyway I don't fully understand myself -- certainly not well enough to present.
I wonder if "purity" in this context is not really well-defined?
It's hard to argue conclusively in either direction because "pure" is not particularly well-defined. Certainly, something makes Haskell fundamentally different from other languages, and it's deeply related to managing side-effects and the IO type¹, but it's not clear exactly what that something is. Given a concrete definition to refer to we could just check if it applies, but this isn't easy: such definitions will tend to either not match everyone's expectations or be too broad to be useful.
So what makes Haskell special, then? In my view, it's the separation between evaluation and execution.
The base language—closely related to the λ-calculus—is all about the former. You work with expressions that evaluate to other expressions, 1 + 1 to 2. No side-effects here, not because they were suppressed or removed but simply because they don't make sense in the first place. They're not part of the model² any more than, say, backtracking search is part of the model of Java (as opposed to Prolog).
If we just stuck to this base language with no added facilities for IO, I think it would be fairly uncontroversial to call it "pure". It would still be useful as, perhaps, a replacement for Mathematica. You would write your program as an expression and then get the result of evaluating the expression at the REPL. Nothing more than a fancy calculator, and nobody accuses the expression language you use in a calculator of being impure³!
But, of course, this is too limiting. We want to use our language to read files and serve web pages and draw pictures and control robots and interact with the user. So the question, then, is how to preserve everything we like about evaluating expressions while extending our language to do everything we want.
The answer we've come up with? IO. A special type of expression that our calculator-like language can evaluate which corresponds to doing some effectful actions. Crucially, evaluation still works just as before, even for things in IO. The effects get executed in the order specified by the resulting IO value, not based on how it was evaluated. IO is what we use to introduce and manage effects into our otherwise-pure expression language.
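A tiny sketch of that split between evaluating an IO expression and executing the resulting action:

-- Evaluating `twice act` just builds a bigger IO value; nothing runs yet.
twice :: IO () -> IO ()
twice act = act >> act

main :: IO ()
main = twice (putStrLn "effect")  -- executed by the runtime: prints twice, in order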
I think that's enough to make describing Haskell as "pure" meaningful.
footnotes
¹ Note how I said IO and not monads in general: the concept of a monad is immensely useful for dozens of things unrelated to input and output, and the IO type has to be more than just a monad to be useful. I feel the two are linked too closely in common discourse.
² This is why unsafePerformIO is so, well, unsafe: it breaks the core abstraction of the language. This is the same as, say, putzing with specific registers in C: it can both cause weird behavior and stop your code from being portable because it goes below C's level of abstraction.
³ Well, mostly, as long as we ignore things like generating random numbers.
A function with type, for example, a -> IO b always returns an identical IO action when given the same input; it is pure in that it cannot possibly inspect the environment, and obeys all the usual rules for pure functions. This means that, among other things, the compiler can apply all of its usual optimization rules to functions with an IO in their type, because it knows they are still pure functions.
Now, the IO action returned may, when run, look at the environment, read files, modify global state, whatever; all bets are off once you run an action. But you don't necessarily have to run an action; you can put five of them into a list and then run them in reverse of the order in which you created them, or never run some of them at all, if you want; you couldn't do this if IO actions implicitly ran themselves when you created them.
Consider this silly program:
main :: IO ()
main = do
  inputs <- take 5 . lines <$> getContents
  let [line1,line2,line3,line4,line5] = map print inputs
  line3
  line1
  line2
  line5
If you run this and then enter 5 lines, you will see them printed back to you, but in a different order and with one omitted, even though our Haskell program maps print over them in the order they were received. You couldn't do this with C's printf, because it immediately performs its I/O when called; Haskell's version just returns an IO action, which you can still manipulate as a first-class value and do whatever you want with.
I see two main differences here:
1) In Haskell, you can do things that are not in the IO monad. Why is this good? Because if you have a function definitelyDoesntLaunchNukes :: Int -> IO Int you don't know that the resulting IO action doesn't launch nukes; it might, for all you know. cantLaunchNukes :: Int -> Int will definitely not launch any nukes (barring ugly hacks that you should avoid in nearly all circumstances).
2) In Haskell, it's not just a cute analogy: IO actions are first-class values. You can put them in lists and leave them there for as long as you want; they won't do anything unless they somehow become part of the main action. The closest that C has to that is function pointers, which are quite a bit more cumbersome to use. In C++ (and most modern imperative languages, really) you have closures, which technically could be used for this purpose, but rarely are - mainly because Haskell is pure and they aren't.
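A small sketch of that first-class treatment:

main :: IO ()
main = do
  let actions = map print [1 .. 5 :: Int]  -- five IO values; nothing printed yet
  sequence_ (reverse actions)              -- run them in reverse: 5, 4, 3, 2, 1
  sequence_ (take 2 actions)               -- run two of them again: 1, 2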
Why does that distinction matter here? Well, where are you going to get your other IO actions/closures from? Probably, functions/methods of some description. Which, in an impure language, can themselves have side effects, rendering the attempt of isolating them in these languages pointless.
fiction-mode: Active
It was quite a challenge, and I think a wormhole could be forming in the neighbour's backyard, but I managed to grab part of a Haskell I/O implementation from an alternate reality:
class Kleisli k where
  infixr 1 >=>

  simple :: (a -> b) -> (a -> k b)
  (>=>)  :: (a -> k b) -> (b -> k c) -> a -> k c

instance Kleisli IO where
  simple = primSimpleIO
  (>=>)  = primPipeIO

primitive primSimpleIO :: (a -> b) -> (a -> IO b)
primitive primPipeIO   :: (a -> IO b) -> (b -> IO c) -> a -> IO c
Back in our slightly-mutilated reality (sorry!), I have used this other form of Haskell I/O to define our form of Haskell I/O:
instance Monad IO where
  return x = simple (const x) ()
  m >>= k  = (const m >=> k) ()
and it works!
fiction-mode: Offline
My question is whether monads in Haskell actually maintain Haskell's purity, and if so how.
The monadic interface, by itself, doesn't restrain the effects - it is only an interface, albeit a jolly versatile one. As my little work of fiction shows, there are other possible interfaces for the job - it's just a matter of how convenient they are to use in practice.
For an implementation of Haskell I/O, what keeps the effects under control is that all the pertinent entities, be they:
IO, simple, (>=>) etc
or:
IO, return, (>>=) etc
are abstract - how the implementation defines those is kept private.
Otherwise, you would be able to devise "novelties" like this:
what_the_heck =
  do spare_world <- getWorld  -- how easy was that?
     launchMissiles           -- let's mess everything up,
     putWorld spare_world     -- and bring it all back :-D
     what_the_heck            -- that was fun; let's do it again!
(Aren't you glad our reality isn't quite so pliable? ;-)
This observation extends to types like ST (encapsulated state) and STM (concurrency) and their stewards (runST, atomically, etc). For types like lists, Maybe and Either, their orthodox definitions in Haskell mean no visible effects.
So when you see an interface - monadic, applicative, etc - for certain abstract types, any effects (if they exist) are contained by keeping its implementation private; safe from being used in aberrant ways.