The question says it all. More specifically, I am writing bindings to a C library, and I'm wondering which C functions I can use unsafePerformIO with. I assume using unsafePerformIO with anything involving pointers is a big no-no.
It would be great to see other cases where it is acceptable to use unsafePerformIO too.
No need to involve C here. The unsafePerformIO function can be used in any situation where:
You know that its use is safe, and
You are unable to prove its safety using the Haskell type system.
For instance, you can make a memoize function using unsafePerformIO:
import qualified Data.Map as Map
import Control.Concurrent.MVar
import System.IO.Unsafe (unsafePerformIO)

memoize :: Ord a => (a -> b) -> a -> b
memoize f = unsafePerformIO $ do
    memo <- newMVar Map.empty
    return $ \x -> unsafePerformIO $ modifyMVar memo $ \memov ->
        return $ case Map.lookup x memov of
            Just y  -> (memov, y)
            Nothing -> let y = f x
                       in (Map.insert x y memov, y)
(This is off the top of my head, so I have no idea if there are flagrant errors in the code.)
The memoize function uses and modifies a memoization dictionary, but since the function as a whole is safe, you can give it a pure type (with no use of the IO monad). However, you have to use unsafePerformIO to do that.
Footnote: When it comes to the FFI, you are responsible for providing the types of the C functions to the Haskell system. You can achieve the effect of unsafePerformIO by simply omitting IO from the type. The FFI system is inherently unsafe, so using unsafePerformIO doesn't make much of a difference.
Footnote 2: There are often really subtle bugs in code that uses unsafePerformIO, the example is just a sketch of a possible use. In particular, unsafePerformIO can interact poorly with the optimizer.
In the specific case of the FFI, unsafePerformIO is meant to be used for calling things that are mathematical functions, i.e. the output depends solely on the input parameters, and every time the function is called with the same inputs, it will return the same output. Also, the function shouldn't have side effects, such as modifying data on disk, or mutating memory.
Most functions from <math.h> could be called with unsafePerformIO, for example.
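For instance, sin can be imported with a pure (non-IO) type directly; a minimal sketch:

{-# LANGUAGE ForeignFunctionInterface #-}

-- Safe to give a non-IO type: sin is a pure mathematical function.
foreign import ccall unsafe "math.h sin"
    c_sin :: Double -> Double

main :: IO ()
main = print (c_sin 0)  -- 0.0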
You're correct that unsafePerformIO and pointers don't usually mix. For example, suppose you have
double p_sin(double *p) { return sin(*p); }
Even though you're just reading a value from a pointer, it's not safe to use unsafePerformIO: if you wrap p_sin as a pure function, multiple calls with the same pointer argument can return different results. It's necessary to keep the function in IO to ensure that it's sequenced properly in relation to pointer updates.
This example should make clear one reason why this is unsafe:
/* file: export.c */
#include <math.h>

double p_sin(double *p) { return sin(*p); }

-- file: main.hs
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.Ptr
import Foreign.Marshal.Alloc
import Foreign.Storable

foreign import ccall "p_sin"
    p_sin :: Ptr Double -> Double

foreign import ccall "p_sin"
    safeSin :: Ptr Double -> IO Double

main :: IO ()
main = do
    p <- malloc
    let sin1 = p_sin p
        sin2 = safeSin p
    poke p 0
    putStrLn $ "unsafe: " ++ show sin1
    sin2 >>= \x -> putStrLn $ "safe: " ++ show x
    poke p 1
    putStrLn $ "unsafe: " ++ show sin1
    sin2 >>= \x -> putStrLn $ "safe: " ++ show x
When compiled, this program outputs
$ ./main
unsafe: 0.0
safe: 0.0
unsafe: 0.0
safe: 0.8414709848078965
Even though the value referenced by the pointer has changed between the two references to "sin1", the expression isn't re-evaluated, leading to stale data being used. Since safeSin (and hence sin2) is in IO, the program is forced to re-evaluate the expression, so the updated pointer data is used instead.
Obviously if it should never be used, it wouldn't be in the standard libraries. ;-)
There are a number of reasons why you might use it. Examples include:
Initialising global mutable state. (Whether you should ever have such a thing in the first place is a whole other discussion...)
Lazy I/O is implemented using this trick. (Again, whether lazy I/O is a good idea in the first place is debatable.)
The trace function uses it; a sketch of its implementation appears after this list. (Yet again, it turns out trace is rather less useful than you might imagine.)
Perhaps most significantly, you can use it to implement data structures which are referentially transparent, but internally implemented using impure code. Often the ST monad will let you do that, but sometimes you need a little unsafePerformIO.
Lazy I/O can be seen as a special-case of the last point. So can memoisation.
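As for the trace bullet above, its implementation is essentially the following (a sketch; the real Debug.Trace goes through its own tracing machinery, but the unsafePerformIO trick is the same):

import System.IO (hPutStrLn, stderr)
import System.IO.Unsafe (unsafePerformIO)

trace :: String -> a -> a
trace msg x = unsafePerformIO $ do
    hPutStrLn stderr msg  -- the smuggled side effect
    return x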
As an example of that last point, consider an "immutable", growable array. Internally you could implement that as a pure "handle" that points to a mutable array. The handle holds the user-visible size of the array, but the actual underlying mutable array is larger than that. When the user "appends" to the array, a new handle is returned, with a new, larger size, but the append is performed by mutating the underlying mutable array.
You can't do this with the ST monad. (Or rather, you can, but it still requires unsafePerformIO.)
Note that it's damned tricky to get this sort of thing right, and the type checker won't catch it if you're wrong. (That's what unsafePerformIO does; it makes the type checker not check that you're doing it correctly!) For example, if you append to an "old" handle, the correct thing to do would be to copy the underlying mutable array. Forget this, and your code will behave very strangely. A sketch follows.
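Here is a rough sketch of that growable array (all names are hypothetical; fixed capacity, no thread safety, and NOINLINE to discourage the optimizer from duplicating the effects):

import Data.Array.IO
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A pure-looking handle over a shared mutable buffer.
data GrowArray a = GrowArray
    { gaSize :: Int           -- user-visible size of this handle
    , gaUsed :: IORef Int     -- how much of the buffer is filled
    , gaBuf  :: IOArray Int a -- shared mutable storage
    }

{-# NOINLINE empty #-}
empty :: Int -> GrowArray a
empty capacity = unsafePerformIO $ do
    used <- newIORef 0
    buf  <- newArray_ (0, capacity - 1)
    return (GrowArray 0 used buf)

{-# NOINLINE append #-}
-- Assumes the capacity is never exceeded; a real version would
-- also grow the buffer.
append :: a -> GrowArray a -> GrowArray a
append x (GrowArray n used buf) = unsafePerformIO $ do
    u <- readIORef used
    if u == n
        then do
            -- Fresh handle: nothing has been appended past n yet,
            -- so we may mutate the shared buffer in place.
            writeArray buf n x
            writeIORef used (n + 1)
            return (GrowArray (n + 1) used buf)
        else do
            -- Stale handle: copy the first n elements into a
            -- fresh buffer before writing.
            (_, hi) <- getBounds buf
            buf'    <- newArray_ (0, hi)
            sequence_ [ readArray buf i >>= writeArray buf' i
                      | i <- [0 .. n - 1] ]
            writeArray buf' n x
            used'   <- newIORef (n + 1)
            return (GrowArray (n + 1) used' buf')

index :: GrowArray a -> Int -> a
index (GrowArray n _ buf) i
    | i < 0 || i >= n = error "index out of range"
    | otherwise       = unsafePerformIO (readArray buf i)

The u == n test is what distinguishes a fresh handle from a stale one; dropping the stale branch is exactly the "forget this" bug described above.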
Now, to answer your real question: There's no particular reason why "anything without pointers" should be a no-no for unsafePerformIO. When asking whether to use this function or not, the only question of significance is this: Can the end-user observe any side-effects from doing this?
If the only thing it does is create some buffer somewhere that the user can't "see" from pure code, that's fine. If it writes to a file on disk... not so fine.
HTH.
The standard trick to instantiate global mutable variables in haskell:
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

{-# NOINLINE bla #-}
bla :: IORef Int
bla = unsafePerformIO (newIORef 10)
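The reference can then be used from ordinary IO code anywhere in the program, for instance:

main :: IO ()
main = do
    modifyIORef bla (+ 1)
    readIORef bla >>= print  -- prints 11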
I also use it to close over the global variable if I want to prevent access to it outside of functions I provide:
import Control.Concurrent.MVar
import System.IO.Unsafe (unsafePerformIO)

{-# NOINLINE printJob #-}
printJob :: String -> Bool -> IO ()
printJob = unsafePerformIO $ do
    p <- newEmptyMVar
    return $ \a b -> do
        -- Illustrative body (elided in the original): stash the
        -- latest job in p, which nothing outside this definition
        -- can reach.
        _ <- tryTakeMVar p
        putMVar p (a, b)
The way I see it, the various unsafe* nonfunctions really should only be used in cases where you want to do something that respects referential transparency but whose implementation would otherwise require augmenting the compiler or runtime system to add a new primitive capability. It's easier, more modular, readable, maintainable and agile to use the unsafe stuff than to have to modify the language implementation for things like that.
FFI work often intrinsically requires you to do this sort of thing.
Sure. You can have a look at a real example here but in general, unsafePerformIO is usable on any pure function that happens to be side effecting. The IO monad may still be needed to track effects (e.g. freeing memory after the value is computed) even when the function is pure (e.g computing a factorial).
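As a sketch of that last point (the C function and its name are hypothetical): a factorial routine that returns its result through an out-parameter needs allocation and a pointer write, but the whole is still a pure function:

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign (Ptr, alloca, peek)
import Foreign.C.Types (CULong)
import System.IO.Unsafe (unsafePerformIO)

-- Hypothetical C function:
-- void factorial(unsigned long n, unsigned long *out);
foreign import ccall unsafe "factorial"
    c_factorial :: CULong -> Ptr CULong -> IO ()

factorial :: CULong -> CULong
factorial n = unsafePerformIO $
    alloca $ \out -> do   -- the memory is freed when alloca returns
        c_factorial n out -- effect: writes through the pointer
        peek out          -- but the overall result is pure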
I'm wondering what c functions I can use unsafePerformIO with. I assume using unsafePerformIO with anything involving pointers is a big no-no.
Depends! unsafePerformIO will fully perform actions and force out all the laziness, but that doesn't mean it will break your program. In general, Haskellers prefer unsafePerformIO to appear only in pure functions, so you can use it on results of e.g. scientific computations but maybe not file reads.
Related
Regarding Haskell's monadic IO construct:
Is it just a convention, or is there an implementation reason for it?
Could you not just FFI into libc.so instead to do your I/O, and skip the whole IO-monad thing?
Would it work anyway, or is the outcome nondeterministic because of:
(a) Haskell's lazy evaluation?
(b) another reason, like that the GHC is pattern-matching for the IO monad and then handling it in a special way (or something else)?
What is the real reason - in the end you end up using a side effect, so why not do it the simple way?
Yes, monadic I/O is a consequence of Haskell being lazy. Specifically, though, monadic I/O is a consequence of Haskell being pure, which is effectively necessary for a lazy language to be predictable.†
This is easy to illustrate with an example. Imagine for a moment that Haskell were not pure, but it was still lazy. Instead of putStrLn having the type String -> IO (), it would simply have the type String -> (), and it would print a string to stdout as a side-effect. The trouble with this is that this would only happen when putStrLn is actually called, and in a lazy language, functions are only called when their results are needed.
Here’s the trouble: putStrLn produces (). Looking at a value of type () is useless, because () means “boring”. That means that this program would do what you expect:
main :: ()
main =
  case putStr "Hello, " of
    () -> putStrLn " world!"
-- prints “Hello, world!\n”
But I think you can agree that this programming style is pretty odd. The case ... of is necessary, however, because it forces the evaluation of the call to putStr by matching against ().
main :: ()
main =
  case putStr "Hello, " of
    _ -> putStrLn " world!"
…now it only prints world!\n, and the first call isn’t evaluated at all.
This actually gets even worse, though, because it becomes even harder to predict as soon as you start trying to do any actual programming. Consider this program:
printAndAdd :: String -> Integer -> Integer -> Integer
printAndAdd msg x y = putStrLn msg `seq` (x + y)

main :: ()
main =
  let x = printAndAdd "first" 1 2
      y = printAndAdd "second" 3 4
  in (y + x) `seq` ()
Does this program print out first\nsecond\n or second\nfirst\n? Without knowing the order in which (+) evaluates its arguments, we don’t know. And in Haskell, evaluation order isn’t even always well-defined, so it’s entirely possible that the order in which the two effects are executed is actually completely impossible to determine!
This problem doesn’t arise in strict languages with a well-defined evaluation order, but in a lazy language like Haskell, we need some additional structure to ensure side-effects are (a) actually evaluated and (b) executed in the correct order. Monads happen to be an interface that elegantly provide the necessary structure to enforce that order.
Why is that? And how is that even possible? Well, the monadic interface provides a notion of data dependency in the signature for >>=, which enforces a well-defined evaluation order. Haskell’s implementation of IO is “magic”, in the sense that it’s implemented in the runtime, but the choice of the monadic interface is far from arbitrary. It seems to be a fairly good way to encode the notion of sequential actions in a pure language, and it makes it possible for Haskell to be lazy and referentially transparent without sacrificing predictable sequencing of effects.
It’s worth noting that monads are not the only way to encode side-effects in a pure way—in fact, historically, they’re not even the only way Haskell handled side-effects. Don’t be misled into thinking that monads are only for I/O (they’re not), only useful in a lazy language (they’re plenty useful to maintain purity even in a strict language), only useful in a pure language (many things are useful monads that aren’t just for enforcing purity), or that you need monads to do I/O (you don’t). They do seem to have worked out pretty well in Haskell for those purposes, though.
† Regarding this, Simon Peyton Jones once noted that “Laziness keeps you honest” with respect to purity.
Could you just FFI into libc.so instead to do IO and skip the IO Monad thing?
Taking from https://en.wikibooks.org/wiki/Haskell/FFI#Impure_C_Functions, if you declare an FFI function as pure (so, with no reference to IO), then
GHC sees no point in calculating twice the result of a pure function
which means the result of the function call is effectively cached. For example, a program where a foreign impure pseudo-random number generator is declared to return a CUInt
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign
import Foreign.C.Types

foreign import ccall unsafe "stdlib.h rand"
    c_rand :: CUInt

main = putStrLn (show c_rand) >> putStrLn (show c_rand)
returns the same thing every call, at least on my compiler/system:
16807
16807
If we change the declaration to return an IO CUInt
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign
import Foreign.C.Types

foreign import ccall unsafe "stdlib.h rand"
    c_rand :: IO CUInt

main = c_rand >>= putStrLn . show >> c_rand >>= putStrLn . show
then this results in (probably) a different number returned each call, since the compiler knows it's impure:
16807
282475249
So you're back to having to use IO for the calls to the standard libraries anyway.
Let's say using FFI we defined a function
c_write :: String -> ()
which lies about its purity, in that whenever its result is forced it prints the string. So that we don't run into the caching problems in Michal's answer, we can define these functions to take an extra () argument.
c_write :: String -> () -> ()
c_rand :: () -> CUInt
On an implementation level this will work as long as CSE is not too aggressive (which it is not in GHC because that can lead to unexpected memory leaks, it turns out). Now that we have things defined this way, there are many awkward usage questions that Alexis points out—but we can solve them using a monad:
newtype IO a = IO { runIO :: () -> a }

-- (A sketch: on modern GHC you'd also need Functor/Applicative
-- instances and import Prelude hiding (IO); elided here.)
instance Monad IO where
    return = IO . const
    m >>= f = IO $ \() -> let x = runIO m () in x `seq` runIO (f x) ()

rand :: IO CUInt
rand = IO c_rand
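Under the same assumptions, the lying c_write from earlier fits the same mold, which gives a small usage sketch:

write :: String -> IO ()
write s = IO (c_write s)

-- The data dependency through >>= (plus the seq inside bind) is
-- what forces the write to happen after the number is drawn.
demo :: IO ()
demo = rand >>= \r -> write (show r)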
Basically, we just stuff all of Alexis's awkward usage questions into a monad, and as long as we use the monadic interface, everything stays predictable. In this sense IO is just a convention—because we can implement it in Haskell there is nothing fundamental about it.
That's from the operational vantage point.
On the other hand, Haskell's semantics in the report are specified using denotational semantics alone. And, in my opinion, the fact that Haskell has a precise denotational semantics is one of the most beautiful and useful qualities of the language, allowing me a precise framework to think about abstractions and thus manage complexity with precision. And while the usual abstract IO monad has no accepted denotational semantics (to the lament of some of us), it is at least conceivable that we could create a denotational model for it, thus preserving some of the benefits of Haskell's denotational model. However, the form of I/O we have just given is completely incompatible with Haskell's denotational semantics.
Simply put, there are only supposed to be two distinguishable values (modulo fatal error messages) of type (): () and ⊥. If we treat FFI as the fundamentals of I/O and use the IO monad only "as a convention", then we effectively add a jillion values to every type—to continue having a denotational semantics, every value must be adjoined with the possibility of performing I/O prior to its evaluation, and with the extra complexity this introduces, we essentially lose all our ability to consider any two distinct programs equivalent except in the most trivial cases—that is, we lose our ability to refactor.
Of course, because of unsafePerformIO this is already technically the case, and advanced Haskell programmers do need to think about the operational semantics as well. But most of the time, including when working with I/O, we can forget about all that and refactor with confidence, precisely because we have learned that when we use unsafePerformIO, we must be very careful to ensure it plays nicely, that it still affords us as much denotational reasoning as possible. If a function has unsafePerformIO, I automatically give it 5 or 10 times more attention than regular functions, because I need to understand the valid patterns of use (usually the type signature tells me everything I need to know), I need to think about caching and race conditions, I need to think about how deep I need to force its results, etc. It's awful[1]. The same care would be necessary of FFI I/O.
In conclusion: yes it's a convention, but if you don't follow it then we can't have nice things.
[1] Well actually I think it's pretty fun, but it's surely not practical to think about all those complexities all the time.
That depends on what the meaning of "is" is—or at least what the meaning of "convention" is.
If a "convention" means "the way things are usually done" or "an agreement among parties covering a particular matter" then it is easy to give a boring answer: yes, the IO monad is a convention. It is the way the designers of the language agreed to handle IO operations and the way that users of the language usually perform IO operations.
If we are allowed to choose a more interesting definition of "convention" then we can get a more interesting answer. If a "convention" is a discipline imposed on a language by its users in order to achieve a particular goal without assistance from the language itself, then the answer is no: the IO monad is the opposite of a convention. It is a discipline enforced by the language that assists its users in constructing and reasoning about programs.
The purpose of the IO type is to create a clear distinction between the types of "pure" values and the types of values which require execution by the runtime system to generate a meaningful result. The Haskell type system enforces this strict separation, preventing a user from (say) creating a value of type Int which launches the proverbial missiles. This is not a convention in the second sense: its entire goal is to move the discipline required to perform side effects in a safe and consistent way from the user and onto the language and its compiler.
Could you just FFI into libc.so instead to do IO and skip the IO Monad thing?
It is, of course, possible to do IO without an IO monad: see almost every other extant programming language.
Would it work anyway, or is the outcome nondeterministic because of Haskell's lazy evaluation, or something else, like GHC pattern-matching on the IO monad and handling it in a special way?
There is no such thing as a free lunch. If Haskell allowed any value to require execution involving IO then it would have to lose other things that we value. The most important of these is probably referential transparency: if myInt could sometimes be 1 and sometimes be 5 depending on external factors then we would lose most of our ability to reason about our programs in a rigorous way (known as equational reasoning).
Laziness was mentioned in other answers, but the issue with laziness would specifically be that sharing would no longer be safe. If x in let x = someExpensiveComputationOf y in x * x was not referentially transparent, GHC would not be able to share the work and would have to compute it twice.
What is the real reason?
Without the strict separation of effectful values from non-effectful values provided by IO and enforced by the compiler, Haskell would effectively cease to be Haskell. There are plenty of languages that don't enforce this discipline. It would be nice to have at least one around that does.
In the end you end up using a side effect. So why not do it the simple way?
Yes, in the end your program is represented by a value called main with an IO type. But the question isn't where you end up, it's where you start: If you start by being able to differentiate between effectful and non-effectful values in a rigorous way then you gain a lot of advantages when constructing that program.
What is the real reason - in the end you end up using a side effect, so why not do it the simple way?
...you mean like Standard ML? Well, there's a price to pay - instead of being able to write:
any :: (a -> Bool) -> [a] -> Bool
any p = or . map p
you would have to type out this:
any :: (a -> Bool) -> [a] -> Bool
any p [] = False
any p (y:ys) = p y || any p ys
Could you not just FFI into libc.so instead to do your I/O, and skip the whole IO-monad thing?
Let's rephrase the question:
Could you not just do I/O like Standard ML, and skip the whole IO-monad thing?
...because that's effectively what you would be trying to do. Why "trying"?
SML is strict, and relies on syntactic ordering to specify the order of evaluation everywhere;
Haskell is non-strict, and relies on data dependencies to specify the order of evaluation for certain expressions e.g. I/O actions.
So:
Would it work anyway, or is the outcome nondeterministic because of:
(a) Haskell's lazy evaluation?
(a) - the combination of non-strict semantics and visible effects is generally useless. For an amusing exhibition of just how useless this combination can be, watch this presentation by Erik Meijer (the slides can be found here).
So I started to wrap my head around monads (as used in Haskell). I'm curious what other ways there are to handle IO or state in a pure functional language (both in theory and in reality). For example, there is a logical language called "Mercury" that uses "effect-typing". In a language such as Haskell, how would effect-typing work? How do other systems work?
There are several different questions involved here.
First, IO and State are very different things. State is easy to do
yourself: Just pass an extra argument to every function, and return an extra
result, and you have a "stateful function"; for example, turn a -> b into
a -> s -> (b,s).
There's no magic involved here: Control.Monad.State provides a wrapper that
makes working with "state actions" of the form s -> (a,s) convenient, as well
as a bunch of helper functions, but that's it.
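For example, here's a tiny sketch of the same function written both ways (assuming the State type from Control.Monad.State in mtl/transformers):

import Control.Monad.State

-- By hand: a "stateful function" threading an Int counter explicitly.
labelHand :: String -> Int -> ((String, Int), Int)
labelHand x n = ((x, n), n + 1)

-- The same thing as a State action; runState exposes the s -> (a, s).
labelState :: String -> State Int (String, Int)
labelState x = do
    n <- get
    put (n + 1)
    return (x, n)

-- runState (labelState "a") 0 == (("a", 0), 1)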
I/O, by its nature, has to have some magic in its implementation. But there are
a lot of ways of expressing I/O in Haskell that don't involve the word "monad".
If we had an IO-free subset of Haskell as-is, and we wanted to invent IO from
scratch, without knowing anything about monads, there are many things we might
do.
For example, if all we want to do is print to stdout, we might say:
type PrintOnlyIO = String
main :: PrintOnlyIO
main = "Hello world!"
And then have an RTS (runtime system) which evaluates the string and prints it.
This lets us write any Haskell program whose I/O consists entirely of printing
to stdout.
This isn't very useful, however, because we want interactivity! So let's invent
a new type of IO which allows for it. The simplest thing that comes to mind is
type InteractIO = String -> String
main :: InteractIO
main = map toUpper
This approach to IO lets us write any code which reads from stdin and writes to
stdout (the Prelude comes with a function interact :: InteractIO -> IO ()
which does this, by the way).
This is much better, since it lets us write interactive programs. But it's
still very limited compared to all the IO we want to do, and also quite
error-prone (if we accidentally try to read too far into stdin, the program
will just block until the user types more in).
We want to be able to do more than read stdin and write stdout. Here's how
early versions of Haskell did I/O, approximately:
data Request  = PutStrLn String | GetLine | Exit | ...
data Response = Success | Str String | ...

type DialogueIO = [Response] -> [Request]

main :: DialogueIO
main resps1 =
    PutStrLn "what's your name?"
    : GetLine
    : case resps1 of
        Success : Str name : resps2 ->
            PutStrLn ("hi " ++ name ++ "!")
            : [Exit]
When we write main, we get a lazy list argument and return a lazy list as a
result. The lazy list we return has values like PutStrLn s and GetLine;
after we yield a (request) value, we can examine the next element of the
(response) list, and the RTS will arrange for it to be the response to our
request.
There are ways to make working with this mechanism nicer, but as you can
imagine, the approach gets pretty awkward pretty quickly. Also, it's
error-prone in the same way as the previous one.
Here's another approach which is much less error-prone, and conceptually very
close to how Haskell IO actually behaves:
data ContIO = Exit | PutStrLn String ContIO | GetLine (String -> ContIO) | ...

main :: ContIO
main =
    PutStrLn "what's your name?" $
    GetLine $ \name ->
    PutStrLn ("hi " ++ name ++ "!") $
    Exit
The key is that instead of taking a "lazy list" of responses as one big
argument at the beginning of main, we make individual requests that accept one
argument at a time.
Our program is now just a regular data type -- a lot like a linked list, except
you can't just traverse it normally: When the RTS interprets main, sometimes
it encounters a value like GetLine which holds a function; then it has to get
a string from stdin using RTS magic, and pass that string to the function,
before it can continue. Exercise: Write interpret :: ContIO -> IO ().
Note that none of these implementations involve "world-passing".
"world-passing" isn't really how I/O works in Haskell. The actual
implementation of the IO type in GHC involves an internal type called
RealWorld, but that's only an implementation detail.
Actual Haskell IO adds a type parameter so we can write actions that
"produce" arbitrary values -- so it looks more like data IO a = Done a |
PutStr String (IO a) | GetLine (String -> IO a) | .... That gives us more
flexibility, because we can create "IO actions" that produce arbitrary
values.
(As Russell O'Connor points out,
this type is just a free monad. We can write a Monad instance for it easily.)
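For instance, here's a sketch of that instance, with the type renamed to
IO' so it doesn't clash with the real IO, and the elided constructors
dropped:

data IO' a = Done a | PutStr String (IO' a) | GetLine (String -> IO' a)

instance Functor IO' where
    fmap f (Done x)     = Done (f x)
    fmap f (PutStr s k) = PutStr s (fmap f k)
    fmap f (GetLine g)  = GetLine (fmap f . g)

instance Applicative IO' where
    pure = Done
    mf <*> mx = mf >>= \f -> fmap f mx

instance Monad IO' where
    Done x     >>= f = f x
    PutStr s k >>= f = PutStr s (k >>= f)
    GetLine g  >>= f = GetLine (\s -> g s >>= f)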
Where do monads come into it, then? It turns out that we don't need Monad for
I/O, and we don't need Monad for state, so why do we need it at all? The
answer is that we don't. There's nothing magical about the type class Monad.
However, when we work with IO and State (and lists and functions and
Maybe and parsers and continuation-passing style and ...) for long enough, we
eventually figure out that they behave pretty similarly in some ways. We might
write a function that prints every string in a list, and a function that runs
every stateful computation in a list and threads the state through, and they'll
look very similar to each other.
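Concretely, those two functions might come out something like this (a
sketch; State is the one from Control.Monad.State):

import Control.Monad.State

-- Print every string in a list.
printAll :: [String] -> IO ()
printAll []     = return ()
printAll (x:xs) = putStrLn x >>= \_ -> printAll xs

-- Run every stateful computation in a list, threading the state.
runAll :: [State s a] -> State s [a]
runAll []     = return []
runAll (m:ms) = m >>= \x -> fmap (x :) (runAll ms)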
Since we don't like writing a lot of similar-looking code, we want a way to
abstract it; Monad turns out to be a great abstraction, because it lets us
abstract many types that seem very different, but still provide a lot of useful
functionality (including everything in Control.Monad).
Given bindIO :: IO a -> (a -> IO b) -> IO b and returnIO :: a -> IO a, we
could write any IO program in Haskell without ever thinking about monads. But
we'd probably end up replicating a lot of the functions in Control.Monad,
like mapM and forever and when and (>=>).
By implementing the common Monad API, we get to use the exact same code for
working with IO actions as we do with parsers and lists. That's really the only
reason we have the Monad class -- to capture the similarities between
different types.
Another major approach is uniqueness typing, as in Clean. The short story is that handles to state (including the real world) can only be used once, and functions that access mutable state return a new handle. This means that an output of the first call is an input of a second, forcing the sequential evaluation.
Effect typing is used in the Disciple Compiler for Haskell, but to the best of my knowledge it would take considerable compiler work to enable it in, say, GHC. I shall leave discussion of the details to those better-informed than myself.
Well, first what is state? It can manifest as a mutable variable, which you don't have in Haskell. You only have memory references (IORef, MVar, Ptr, etc.) and IO/ST actions to act on them.
However, state itself can be pure as well. To see that, consider the Stream type:
data Stream a = Stream a (Stream a)
This is a stream of values. However an alternative way to interpret this type is a changing value:
stepStream :: Stream a -> (a, Stream a)
stepStream (Stream x xs) = (x, xs)
This gets interesting when you allow two streams to communicate. You then get the automaton category Auto:
newtype Auto a b = Auto (a -> (b, Auto a b))
This is really like Stream, except that now at every instant the stream gets some input value of type a. This forms a category, so one instant of a stream can get its value from the same instant of another stream.
Again a different interpretation of this: You have two computations that change over time and you allow them to communicate. So every computation has local state. Here is a type that is isomorphic to Auto:
{-# LANGUAGE ExistentialQuantification #-}

data LS a b =
    forall s.
    LS s ((a, s) -> (b, s))
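To make the claimed isomorphism concrete, here's a sketch of the two conversions (the hidden state in fromAuto is the automaton itself):

toAuto :: LS a b -> Auto a b
toAuto (LS s f) = Auto $ \x ->
    let (y, s') = f (x, s)
    in  (y, toAuto (LS s' f))

fromAuto :: Auto a b -> LS a b
fromAuto auto = LS auto (\(x, Auto f) -> f x)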
Take a look at A History of Haskell: Being Lazy With Class. It describes two different approaches to doing I/O in Haskell, before monads were invented: continuations and streams.
There is an approach called Functional Reactive Programming that represents time-varying values and/or event streams as a first-class abstraction. A recent example that comes to my mind is Elm (it is written in Haskell and has a syntax similar to Haskell).
I'm curious - what other ways I/O or state can be handled in a pure functional language (both in theory or reality)?
I'll just add to what's already been mentioned here (note: some of these approaches don't seem to have established names, so a few of the names below are improvised).
Approaches with freely-available descriptions or implementations:
"Orthogonal directives" - see An alternative approach to I/O by Maarten Fokkinga and Jan Kuper.
Pseudodata - see Nondeterminism with Referential Transparency in Functional Programming Languages by F. Warren Burton. The approach is used by Dave Harrison to implement clocks in his thesis
Functional Real-Time Programming: The Language Ruth And Its Semantics, and name supplies in the functional pearl On generating unique names by Lennart Augustsson, Mikael Rittri and Dan Synek; there are also a few library implementations in Hackage.
Witnesses - see Witnessing Side Effects by Tachio Terauchi and Alex Aiken.
Observers - see Assignments for Applicative Languages by Vipin Swarup, Uday S. Reddy and Evan Ireland.
Other approaches - references only:
System tokens:
L. Augustsson. Functional I/O Using System Tokens. PMG Memo 72, Dept Computer Science, Chalmers University of Technology, S-412 96 Göteborg, 1989.
"Effect trees":
Rebelsky S.A. (1992) I/O trees and interactive lazy functional programming. In: Bruynooghe M., Wirsing M. (eds) Programming Language Implementation and Logic Programming. PLILP 1992. Lecture Notes in Computer Science, vol 631. Springer, Berlin, Heidelberg.
Here is some food for thought.
When I write monadic code, the monad imposes ordering on the operations done. For example, if I write in the IO monad:
do a <- doSomething
   b <- doSomethingElse
   return (a + b)
I know doSomething will be executed before doSomethingElse.
Now, consider the equivalent code in a language like C:
return (doSomething() + doSomethingElse());
The semantics of C don't actually specify what order these two function calls will be evaluated, so the compiler is free to move things around as it pleases.
My question is this: How would I write monadic code in Haskell that also leaves this evaluation order undefined? Ideally, I would reap some benefits when my compiler's optimizer looks at the code and starts moving things around.
Here are some possible techniques that don't get the job done, but are in the right "spirit":
Write the code in functorial style, that is, write plus doSomething doSomethingElse and let plus schedule the monadic calls. Drawbacks: You lose sharing on the results of the monadic actions, and plus still makes a decision about when things end up being evaluated.
Use lazy IO, that is, unsafeInterleaveIO, which defers the scheduling to the demands of lazy evaluation. But lazy is different from strict with undefined order: in particular, I do want all of my monadic actions to get executed.
Lazy IO, combined with immediately seq'ing all of the arguments. In particular, seq does not impose ordering constraints.
In this sense, I want something more flexible than monadic ordering but less flexible than full-out laziness.
This problem of over-sequentializing monad code is known as the "commutative monads problem".
Commutative monads are monads for which the order of actions makes no difference (they commute), that is, when the following code:
do a <- f x
   b <- g y
   m a b
is the same as:
do b <- g y
   a <- f x
   m a b
There are many monads that commute (e.g. Maybe, Random). If the monad is commutative, then the operations captured within it can be computed in parallel, for example. They are very useful!
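For instance, with Maybe the two orderings above are interchangeable:

pairAB, pairBA :: Maybe (Int, Int)
pairAB = do { a <- Just 1; b <- Just 2; return (a, b) }
pairBA = do { b <- Just 2; a <- Just 1; return (a, b) }
-- pairAB == pairBA == Just (1, 2)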
However, we don't have a good syntax for monads that commute, though a lot of people have asked for such a thing -- it is still an open research problem.
As an aside, applicative functors do give us such freedom to reorder computations; however, you have to give up on the notion of bind (as suggestions for e.g. liftM2 show).
This is a deep dirty hack, but it seems like it should do the trick to me.
{-# LANGUAGE MagicHash, UnboxedTuples #-}
module Unorder where

import GHC.Types (IO (..))

unorder :: IO a -> IO b -> IO (a, b)
unorder (IO f) (IO g) = IO $ \rw# ->
    let (# _, x #) = f rw#
        (# _, y #) = g rw#
    in (# rw#, (x, y) #)
Since this puts non-determinism in the hands of the compiler, it should behave "correctly" (i.e. nondeterministically) with regards to control flow issues (i.e. exceptions) as well.
On the other hand, we can't pull the same trick with most standard monads such as State and Either a, since we're really relying on spooky action at a distance available via messing with the RealWorld token. To get the right behavior, we'd need some annotation available to the optimizer that indicated we were fine with nondeterministic choice between two non-equivalent alternatives.
The semantics of C don't actually specify what order these two function calls will be evaluated, so the compiler is free to move things around as it pleases.
But what if doSomething() causes a side-effect that will change the behavior of doSomethingElse()? Do you really want the compiler to mess with the order? (Hint: no) The fact that you are in a monad at all suggests that such may be the case. Your note that "you lose sharing on the results" also suggests this.
However, note that monadic does not always mean sequenced. It's not exactly what you describe, but you may be interested in the Par monad, which allows you to run your actions in parallel.
You are interested in leaving the order undefined so that the compiler can magically optimize it for you. Perhaps instead you should use something like the Par monad to indicate the dependencies (some things inevitably have to happen before others) and then let the rest run in parallel.
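For instance, a sketch using the monad-par package (Control.Monad.Par): the only ordering comes from the data dependencies on the two IVars, so the spawned computations are free to run in parallel:

import Control.Monad.Par  -- from the monad-par package

sumTwo :: Int -> Int -> Int
sumTwo a b = runPar $ do
    ia <- spawnP (expensive a)  -- may run in parallel
    ib <- spawnP (expensive b)  -- with this one
    x  <- get ia                -- the gets are the only ordering
    y  <- get ib
    return (x + y)
  where
    expensive n = sum [1 .. n]  -- stand-in for real work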
Side note: don't confuse Haskell's return with anything like C's return.
I've had the IO monad described to me as a State monad where the state is "the real world". The proponents of this approach to IO argue that this makes IO operations pure, as in referentially transparent. Why is that? From my perspective it appears that code inside the IO monad has plenty of observable side effects. Also, isn't it possible to describe pretty much any non-pure function as a function of the real world? For example, can't we think of, say, C's malloc as being a function that takes a RealWorld and an Int and returns a pointer and a RealWorld, only just like in the IO monad the RealWorld is implicit?
Note: I know what a monad is and how it's used. Please don't respond with a link to a random monad tutorial unless it specifically addresses my question.
I think the best explanation I've heard was actually fairly recently on SO. IO Foo is a recipe for creating a Foo. Another common, more literal, way of saying this is that it is a "program that produces a Foo". It can be executed (many times) to create a Foo or die trying. The execution of the recipe/program is what we ultimately want (otherwise, why write one?), but the thing that is represented by an IO action in our code is the recipe itself.
That recipe is a pure value, in the same exact sense that a String is a pure value. Recipes can be combined and manipulated in interesting, sometimes astonishing, ways, but the many ways these recipes can be combined (except for the blatantly non-pure unsafePerformIO, unsafeCoerce, etc.) are all completely referentially transparent, deterministic, and all that nice stuff. The resulting recipe depends in absolutely no way whatsoever on the state of anything other than the recipes that it was built up from.
Also, isn't it possible to describe pretty much any non-pure function as a function of the real world? For example, can't we think of, say, C's malloc as being a function that takes a RealWorld and an Int and returns a pointer and a RealWorld, only just like in the IO monad the RealWorld is implicit?
For sure ...
The whole idea of functional programming is to describe programs as a combination of small, independent calculations building up bigger computations.
Having these independent calculations, you get lots of benefits, ranging from concise programs to efficient and easily parallelizable code, and laziness, up to the rigorous guarantee that control flows as intended, with no chance of interference or corruption of arbitrary data.
Now - in some cases (like IO), we need impure code. Calculations involving such operations cannot be independent - they could mutate arbitrary data of another computation.
The point is - Haskell is always pure, IO doesn't change this.
So our impure, non-independent code has to get a common dependency: we have to pass a RealWorld. Whatever stateful computation we want to run, we have to pass this RealWorld thing to apply our changes to, and whatever other stateful computation wants to see or make changes has to know the RealWorld too.
Whether this is done explicitly or implicitly through the IO monad is irrelevant. You build up a Haskell program as a giant computation that transforms data, and one part of this data is the RealWorld.
Once the initial main :: IO () gets called when your program is run with the current real world as a parameter, this real world gets carried through all impure calculations involved, just as data would in a State. That's what monadic >>= (bind) takes care of.
And where the RealWorld doesn't get passed (as in pure computations, or anything without a chain of >>= reaching main), there is no chance of doing anything with it. And where it does get passed, that happens by purely functional passing of an (implicit) parameter. That's why
let foo = putStrLn "AAARGH" in 42
does absolutely nothing - and why the IO monad - like anything else - is pure. What happens inside this code can of course be impure, but it's all caught inside, with no chance of interfering with non-connected computations.
Suppose we have something like:
animatePowBoomWhenHearNoiseInMicrophone :: TimeDiff -> Sample -> IO ()
animatePowBoomWhenHearNoiseInMicrophone
    levelWeightedAverageHalfLife levelThreshold = ...

programA :: IO ()
programA = animatePowBoomWhenHearNoiseInMicrophone 3 10000

programB :: IO ()
programB = animatePowBoomWhenHearNoiseInMicrophone 3 10000
Here's a point of view:
animatePowBoomWhenHearNoiseInMicrophone is a pure function in the sense that its results for the same inputs, here programA and programB, are exactly the same. You can set main = programA or main = programB and it would be exactly the same.
animatePowBoomWhenHearNoiseInMicrophone is a function receiving two arguments and resulting in a description of a program. The Haskell runtime can execute this description if you set main to it or otherwise include it in main via binding.
What is IO? IO is a DSL for describing imperative programs, encoded in "pure-haskell" data structures and functions.
"complete-haskell" aka GHC is an implementation of both "pure-haskell", and an imperative implementation of an IO decoder/executer.
It quite simply comes down to extensional equality:
If you were to call getLine twice, both calls would return an IO String which would look exactly the same on the outside each time. If you could write a function to take two IO Strings and return a Bool signalling a detected difference between them, it would not be possible to detect any difference from any observable properties. It could not ask any other function whether they are equal, and any attempt at using >>= must also return something in IO, all of which are equal externally.
I'll let Martin Odersky answer this
The IO monad does not make a function pure. It just makes it obvious
that it's impure.
Sounds clear enough.
Even though its title is a bit weird (in that it doesn't precisely match the content), the following haskell-cafe thread contains a nice discussion about different IO models for Haskell.
http://www.mail-archive.com/haskell-cafe@haskell.org/msg79613.html
Well, this is what we have been taught at college:
A function is referentially transparent when it always returns the same value for a given input (or the same expression always evaluates to the same value in the same context). Therefore, for example, getChar would not be referentially transparent if it had the type signature () -> Char or just Char, because you can get different results if you call the function multiple times with the same argument.
But if you introduce the IO monad, then getChar can have the type IO Char, and this type has only one single value: an IO Char action. So getChar always returns the same value, no matter which key the user really pressed.
But you are still able to "get" the underlying value out of this IO Char thing. Well, not really get it, but pass it to another function using the bind operator (>>=), so you can work with the Char the user entered further on in your program.
Philip Wadler writes:
In an impure language, an operation like tick would be represented by a function of type () -> (). The spurious argument () is required to delay the effect until the function is applied, and since the output type is () one may guess that the function's purpose lies in a side effect. In contrast, here tick has type M (): no spurious argument is needed, and the appearance of M explicitly indicates what sort of effect may occur.
I fail to understand how M () makes the empty argument list () less spurious, but Wadler is pretty clear that monads just indicate a kind of side effect; they do not eliminate it.
In what sense is the monadic IO type pure?
In the sense that values of the IO type are portions of Standard ML abstract imperative code which ideally can only be processed by the RTS of a Haskell implementation - in How to Declare an Imperative, Philip Wadler provides a hint as to how this is possible:
(* page 26 *)
type 'a io = unit -> 'a

infix >>=

val >>= : 'a io * ('a -> 'b io) -> 'b io
fun m >>= k = fn () => let
                           val x = m ()
                           val y = k x ()
                       in
                           y
                       end

val return : 'a -> 'a io
fun return x = fn () => x

(* page 27 *)
val execute : unit io -> unit
fun execute m = m ()
However, not everyone finds this situation acceptable:
[...] a state-less model of computation on top of a machinery whose most
eminent characteristic is state [means] the gap between model and machinery is wide, and therefore costly to bridge. [...]
This has in due time also been recognized by the protagonists of functional
languages. They have introduced state (and variables) in various tricky ways.
The purely functional character has thereby been compromised and sacrificed. [...]
Niklaus Wirth.
...anyone for Miranda(R)?
I've had the IO monad described to me as a State monad where the state is "the real world".
That would be the classic pass-the-planet model of I/O, which Clean uses directly:
import StdFile
import StdMisc
import StdString

Start :: *World -> *World
Start w = putString "Hello, world!\n" w

putString :: String *World -> *World
putString str world
    # (out, world1) = stdio world
    # out1          = fwrites str out
    # (_, world2)   = fclose out1 world1
    = world2

putChar :: Char *World -> *World
putChar c w = putString {c} w
The proponents of this approach to I/O argue that this makes I/O operations pure, as in referentially transparent. Why is that?
Because it's usually correct.
From the standard Haskell 2010 library module Data.List:
mapAccumL _ s []     = (s, [])
mapAccumL f s (x:xs) = (s'', y:ys)
    where (s' , y ) = f s x
          (s'', ys) = mapAccumL f s' xs
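For instance, numbering the elements of a list while threading a counter:

>>> mapAccumL (\n x -> (n + 1, (n, x))) 0 "abc"
(3,[(0,'a'),(1,'b'),(2,'c')])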
If this idiom is so common that it has specific definitions to support it, then its use as a model of I/O (with a suitable state-type) is really no great surprise - from pages 14-15 of State in Haskell by John Launchbury and Simon Peyton Jones:
How, then, are I/O operations executed at all? The meaning of the whole program
is given by the value of the top-level identifier mainIO:
mainIO :: IO ()
mainIO is an I/O state transformer, which is applied to the external world state by
the operating system. Semantically speaking, it returns a new world state, and the
changes embodied therein are applied to the real world.
(...back when main was called mainIO.)
The most recent Clean Language Report (I/O on the Unique World on page 24 of 148) goes into more detail:
The world which is given to the initial expression is an abstract data structure, an abstract world of type *World which
models the concrete physical world as seen from the program. The abstract world can in principle contain
anything what a functional program needs to interact during execution with the concrete world. The world can be seen as a
state and modifications of the world can be realized via state transition functions defined on the world or a part of the world. By
requiring that these state transition functions work on a unique world the modifications of the abstract world can directly be
realized in the real physical world, without loss of efficiency and without losing referential transparency.
In terms of semantics, the crucial point is this: for an I/O-centric program's changes to take effect, that program must return the final world-state value.
Now consider this small Clean program:
Start :: *World -> *World
Start w = loopX w

loopX :: *World -> *World
loopX w
    # w1 = putChar 'x' w
    = loopX w1
Obviously the final World value is never returned so 'x' should not be seen at all...
Also, isn't it possible to describe pretty much any non-pure function like a function of the real world?
Yes; that's more-or-less how the FFI works in Haskell 2010.
From my perspective it appears that code inside the monadic IO type have plenty of observable side effects.
If you're using GHC, it isn't an appearance - from
A History of Haskell (page 26 of 55) by Paul Hudak, John Hughes, Simon Peyton Jones, and Philip Wadler:
Of course, GHC does not actually pass the world around; instead, it passes a dummy “token,” to ensure proper sequencing of actions in the presence of lazy evaluation, and performs input and output as actual side effects!
But that's merely an implementation detail:
An IO computation is a function that (logically) takes the state of the world, and returns a modified world as well as the return value.
Logic doesn't apply to the real world.
Marvin Lee Minsky.
I'm trying a little experiment in Haskell, wondering if it is possible to exploit laziness to process IO. I'd like to write a function that takes a String (a list of Chars) and produces a String, lazily. I would then like to be able to lazily feed it characters from IO, so each character would be processed as soon as it was available, and the output would be produced as the characters necessary became available. However, I'm not quite sure if/how I can produce a lazy list of characters from input inside the IO monad.
Regular String IO in Haskell is lazy. So your example should just work out of the box.
Here's an example, using the 'interact' function, which applies a function to a lazy stream of characters:
interact :: (String -> String) -> IO ()
Let's filter out the letter 'e' from the input stream, lazily (i.e. run in constant space):
main = interact $ filter (/= 'e')
You could also use getContents and putStr if you like. They're all lazy.
Running it to filter the letter 'e' from the dictionary:
$ ghc -O2 --make A.hs
$ ./A +RTS -s < /usr/share/dict/words
...
2 MB total memory in use (0 MB lost due to fragmentation)
...
so we see that it ran in a constant 2M footprint.
The simplest method of doing lazy IO involves functions such as interact, readFile, hGetContents, and such, as dons says; there's a more extended discussion of these in the book Real World Haskell that you might find useful. If memory serves me, all such functions are eventually implemented using the unsafeInterleaveIO that ephemient mentions, so you can also build your own functions that way if you want.
On the other hand, it might be wise to note that unsafeInterleaveIO is exactly what it says on the tin: unsafe IO. Using it--or functions based on it--breaks purity and referential transparency. This allows apparently pure functions (that is, that do not return an IO action) to effect the outside world when evaluated, produce different results from the same arguments, and all those other unpleasant things. In practice, most sensible ways of using unsafeInterleaveIO won't cause problems, and simple mistakes will usually result in obvious and easily diagnosed bugs, but you've lost some nice guarantees.
There are alternatives, of course; you can find assorted libraries on Hackage that provide restricted, safer lazy IO or conceptually different approaches. However, given that problems arise only rarely in practical use, I think most people are inclined to stick with the built-in, technically unsafe functions.
unsafeInterleaveIO :: IO a -> IO a
unsafeInterleaveIO allows an IO computation to be deferred lazily. When passed a value of type IO a, the IO will only be performed when the value of type a is demanded. This is used to implement lazy file reading; see System.IO.hGetContents.
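As a rough sketch of how that works (a hypothetical lazyRead, not the real hGetContents implementation):

import System.IO
import System.IO.Unsafe (unsafeInterleaveIO)

-- Each chunk of IO is wrapped in unsafeInterleaveIO, so a character
-- is read from the handle only when the corresponding cons cell of
-- the result string is actually demanded.
lazyRead :: Handle -> IO String
lazyRead h = unsafeInterleaveIO $ do
    eof <- hIsEOF h
    if eof
        then return []
        else do
            c  <- hGetChar h
            cs <- lazyRead h  -- builds a deferred tail, reads nothing yet
            return (c : cs)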
For example, main = getContents >>= return . map Data.Char.toUpper >>= putStr is lazy; as you feed characters to stdin, you will get characters on stdout.
(This is the same as writing main = interact $ map Data.Char.toUpper, as in dons's answer.)