How do laziness and I/O work together in Haskell?

I'm trying to get a deeper understanding of laziness in Haskell.
I was imagining the following snippet today:
data Image = Image { name :: String, pixels :: String }
image :: String -> IO Image
image path = Image path <$> readFile path
The appeal here is that I could simply create an Image instance and pass it around; if I need the image data it would be read lazily - if not, the time and memory cost of reading the file would be avoided:
main = do
  image <- image "file"
  print $ length $ pixels image
But is that how it actually works? How is laziness compatible with IO? Will readFile be called regardless of whether I access pixels image or will the runtime leave that thunk unevaluated if I never refer to it?
If the image is indeed read lazily, then isn't it possible I/O actions could occur out of order? For example, what if immediately after calling image I delete the file? Now the putStrLn call will find nothing when it tries to read.

How is laziness compatible with I/O?
Short answer: It isn't.
Long answer: IO actions are strictly sequenced, for pretty much the reasons you're thinking of. Any pure computations done with the results can be lazy, of course; for instance if you read in a file, do some processing, and then print out some of the results, it's likely that any processing not needed by the output won't be evaluated. However, the entire file will be read, even parts you never use. If you want lazy I/O, you have roughly two options:
1. Roll your own explicit lazy-loading routines and such, like you would in any strict language. Seems annoying, granted, but on the other hand Haskell makes a fine strict, imperative language. If you want to try something new and interesting, try looking at Iteratees.
2. Cheat like a cheating cheater. Functions such as hGetContents will do lazy, on-demand I/O for you, no questions asked. What's the catch? It (technically) breaks referential transparency. Pure code can indirectly cause side effects, and funny things can happen involving ordering of side effects if your code is really convoluted. hGetContents and friends are implemented using unsafeInterleaveIO, which is... exactly what it says on the tin. It's nowhere near as likely to blow up in your face as using unsafePerformIO, but consider yourself warned.
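For a concrete taste of those ordering surprises, here is the classic pitfall (a sketch; it assumes a file named "file" exists, and relies on GHC's standard multiple-reader/single-writer file locking):
import Data.Char (toUpper)

main :: IO ()
main = do
  contents <- readFile "file"              -- lazy: only opens the handle
  writeFile "file" (map toUpper contents)  -- throws "resource busy":
                                           -- the read handle is still open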

Lazy I/O breaks Haskell's purity. The results from readFile are indeed produced lazily, on demand. The order in which I/O actions occur is not fixed, so yes, they could occur "out of order". The problem of deleting the file before pulling the pixels is real. In short, lazy I/O is a great convenience, but it's a tool with very sharp edges.
The book Real World Haskell has a lengthy treatment of lazy I/O and goes over some of its pitfalls.

Related

How does lazy evaluation interplay with MVars?

Let's say I have multiple threads that are reading from a file and I want to make sure that only a single thread is reading from the file at any point in time.
One way to implement this is to use an mvar :: MVar () and ensure mutual exclusion as follows:
thread = do
  ...
  _ <- takeMVar mvar
  x <- readFile "somefile" -- critical section
  putMVar mvar ()
  ...
  -- do something that evaluates x.
The above should work fine in strict languages, but unless I'm missing something, I might run into problems with this approach in Haskell. In particular, since x is evaluated only after the thread exits the critical section, it seems to me that the file will only be read after the thread has executed putMVar, which defeats the point of using MVars in the first place, as multiple threads may read the file at the same time.
Is the problem that I'm describing real and, if so, how do I get around it?
Yes, it's real. You get around it by avoiding all the base functions that are implemented using unsafeInterleaveIO. I don't have a complete list, but that's at least readFile, getContents, hGetContents. IO actions that don't do lazy IO -- like hGet or hGetLine -- are fine.
If you must use lazy IO, then fully evaluate its results in an IO action inside the critical section, e.g. by combining rnf and evaluate.
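A sketch of that fix, applied to the question's code (the function and file names are illustrative):
import Control.Concurrent.MVar
import Control.DeepSeq (rnf)
import Control.Exception (evaluate)

thread :: MVar () -> IO String
thread mvar = do
  takeMVar mvar
  x <- readFile "somefile" -- lazy: little or nothing is read yet
  evaluate (rnf x)         -- force the whole file to be read while we hold the lock
  putMVar mvar ()
  return x                 -- x is now fully evaluated; safe to use elsewhere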
Some other commentary on related things, but that aren't directly answers to this question:
Laziness and lazy IO are really separate concepts. They happen to share a name because humans are lazy at naming. Most IO actions do not involve lazy IO and do not run into this problem.
There is a related problem about stuffing unevaluated pure computations into your MVar and accidentally evaluating them on a different thread than you were expecting, but if you avoid lazy IO, then evaluating on the wrong thread is merely a performance bug rather than an actual semantics bug.
readFile should be named unsafeReadFile because it's unsafe in the same way as unsafeInterleaveIO. If you stay away from functions that have, or should have, the unsafe prefix then you won't have this problem.
Haskell isn't a lazily evaluated language. It's a language in which, as in mathematics, evaluation order doesn't matter (except that you mustn't spend an unbounded amount of time trying to evaluate a function's argument before evaluating the function body). Compilers are free to reorder computations for efficiency reasons, and GHC does, so programs compiled with GHC aren't lazily evaluated as a rule.
readFile (along with getContents and hGetContents) is one of a small number of standard Haskell functions without the unsafe prefix that violate Haskell's value semantics. GHC has to specially disable its optimizations when it encounters such functions because they make program transformations observable that aren't supposed to be observable.
These functions are convenient hacks that can make some toy programs easier to write. You shouldn't use them in threaded code, or, in my opinion, at all. I think they shouldn't even be used in introductory programming courses (which is probably what they were meant for) because they give beginners a totally wrong impression of how evaluation in Haskell is supposed to work.

Using main in a Haskell file

I've done a fair bit of programming in Haskell using GHCi; however, our next assignment requires us to use just GHC to compile and test our code.
Because of how GHCi works compared to GHC, you apparently need to write a main function, which GHC looks for in your code.
My question is: if Haskell promotes type safety and no side effects except within an IO action, why does the main part of any Haskell program have to be an IO action?
Forgive me if I'm not understanding something fundamental, I just couldn't find any resources which ultimately explains this.
If your main function is not an IO action, then all it can do is produce a result. Since Haskell is lazy, this (usually) means it just produces a promise that the computation will be performed; nothing is actually computed until the result is needed. Since the usual ways to need a value are to print it, send it over a network, save it to disk, or use it for some other kind of I/O, your result would never be computed and the program would simply exit. Imagine a fake program like
main :: Int
main = 1 + 1
Suppose you could compile and run this; what would you expect to happen? Nothing gets printed, nothing asks for the result of main, so all Haskell can do with this is create the promise that 1 + 1 will be computed at some point and then exit the program. Basically, you can't do anything interesting at the top level without IO, and since we want programs to do interesting things, we need our top level to be an IO action.
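For contrast, the working version wraps the computation in an IO action, which is exactly what creates the demand:
main :: IO ()
main = print (1 + 1) -- printing demands the result, so 1 + 1 is computed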
Put simply, running a program is a side-effect. This is why the top-level function is an I/O action.
An ideal Haskell program is a large chunk of pure code with a thin I/O "skin" around it.
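A minimal sketch of that shape (the names are illustrative):
-- Pure core: no side effects, easy to test.
greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

-- Thin I/O skin: the only place effects happen.
main :: IO ()
main = do
  name <- getLine
  putStrLn (greet name)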
I think it's pretty straightforward: ultimately, you need every program to do I/O at the top level; otherwise, how would you know it did anything at all, and how useful would it be? So you must have console I/O, network I/O, disk I/O, or something similar. I don't see how you could get around that.

Referential transparency and mmap in Haskell

I was hoping to use System.INotify and System.IO.MMap together in order to watch for file modifications and then quickly perform diffs for sending patches over a network. However, in the documentation for System.IO.MMap there's a couple of warnings about referential transparency:
The documentation states
It is only safe to mmap a file if you know you are the sole user. Otherwise referential transparency may be or may be not compromised. Sadly semantics differ much between operating systems.
The values that mmap returns are IO ByteString; surely when I use such a value with putStr I'm already expecting a possibly different result each time? I assume the author means that the value could change in the middle of an I/O operation such as putStr and cause a crash?
START-OF-EDIT: Come to think of it, I guess the answer to this part of the question is somewhat obvious...
If the value changes any time after it is unboxed it would be problematic.
do
  v <- mappedValue :: IO ByteString
  putStr v
  putStr v -- Expects the same value of v everywhere
END-OF-EDIT
Shouldn't it be possible to acquire some kind of lock on the mapped region or on the file?
Alternatively, would it be possible to write a function copy :: IO ByteString -> IO ByteString that takes a snapshot of the file in its current state in a safe way?
I think the author means that the value can change even inside a pure function that views it as a plain ByteString (no IO).
The memory-mapped file is a region of memory. It doesn't make much sense to copy its contents back and forth, for performance reasons (otherwise one could just do plain old stream-based I/O), so the ByteString you are getting is live.
If you want a snapshot, just use stream-based I/O. That's what reading a file does: it creates a snapshot of the file in memory! I guess an alternative would be using the ForeignPtr interface, which does not carry the referential transparency warning. I'm not familiar with ForeignPtrs, so I cannot guarantee it will work, but it looks promising and I would investigate it.
You can also try calling map id on your ByteString but it is not guaranteed you will get a copy distinct from the original.
Mandatory file locking, especially on Linux, is a mess that is better avoided. Advisory file locking is OK, except nobody is using it, so it effectively does not exist.
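On the snapshot question: Data.ByteString.copy always allocates a fresh buffer, unlike map id, so a wrapper along these lines should give you a stable copy (a sketch; note the copy itself is not atomic with respect to a concurrent writer):
import qualified Data.ByteString as BS

-- Take a private snapshot of the mapped bytes. After BS.copy returns,
-- the result no longer shares memory with the mapped region.
snapshot :: IO BS.ByteString -> IO BS.ByteString
snapshot getMapped = do
  mapped <- getMapped
  return $! BS.copy mapped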

Why does Haskell not have an I Monad (for input only, unlike the IO monad)?

Conceptually, it seems that a computation that performs output is very different from one that performs input only. The latter is, in one sense, much purer.
I, for one, would like to have a way to separate the input only parts of my programme from the ones that might actually write something out.
So, why is there no input only Monad?
Any reason why it wouldn't work to have an I monad (and an O Monad, which could be combined into the IO Monad)?
Edit: I mostly meant input as reading files, not interacting with the user. This is also my use case, where I can assume that input files do not change during the execution of the programme (otherwise, it's fine to get undefined behaviour).
I disagree with bdonlan's answer. It's true that neither input nor output are more "pure" but they are quite different. It's quite valid to critique IO as the single "sin bin" where all effects get crammed together, and it does make ensuring certain properties harder. For example, if you have many functions that you know only read from certain memory locations, and which could never cause those locations to be altered, it would be nice to know that you can reorder their execution. Or if you have a program that uses forkIO and MVars, it would be nice to know, based on its type, that it isn't also reading /etc/passwd.
Furthermore, one can compose monadic effects in a fashion besides just stacked transformers. You can't do this with all monads (just free monads), but for a case like this that's all you really need. The iospec package, for example, provides a pure specification of IO -- it doesn't separate reading and writing, but it does separate them from, e.g., STM, MVars, forkIO, and so forth.
http://hackage.haskell.org/package/IOSpec
The key ideas for how you can combine the different monads cleanly are described in the Data Types a la Carte paper (great reading, very influential, can't recommend it enough, etc., etc.).
The 'Input' side of the IO monad is just as much output as it is input. If you consume a line of input, the fact that you consumed that input is communicated to the outside, and also serves to be recorded as impure state (i.e., you don't consume the same line again later); it's just as much an output operation as a putStrLn. Additionally, input operations must be ordered with respect to output operations; this again limits how much you can separate the two.
If you want a pure read-only monad, you should probably use the reader monad instead.
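A tiny illustration of the Reader suggestion (using Control.Monad.Reader from mtl; the names here are made up):
import Control.Monad.Reader

-- A "read-only" computation: it can consult its environment
-- but has no way to perform output.
banner :: Reader String String
banner = do
  appName <- ask
  return ("*** " ++ appName ++ " ***")

main :: IO ()
main = putStrLn (runReader banner "demo")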
That said, you seem to be a bit confused about what combining monads can do. While you can indeed combine two monads (assuming one is a monad transformer) and get some kind of hybrid semantics, you have to be able to run the result. That is, even if you could define an IT (OT Identity) r, how do you run it? You have no root IO monad in this case, so main must be a pure function. Which would mean you'd have main = runIdentity . runOT . runIT $ .... Which is nonsense, since you're getting impure effects from a pure context.
In other words, the type of the IO monad has to be fixed. It can't be a user-selectable transformed type, because its type is nailed down into main. Sure, you could call it I (O Identity), but you don't gain anything; O (I Identity) would be a useless type, as would be I [] or O Maybe, because you'd never be able to run any of these.
Of course, if IO is left as the fundamental IO monad type, you could define routines like:
runI :: I Identity r -> IO r
This works, but again, you can't have anything underneath this I monad very easily, and you're not gaining much from this complexity. What would it even mean to have an Output monad transformed over a List base monad, anyway?
When you obtain input, you cause side effects that change both the state of the outside world (the input is consumed) and the state of your program (the input is used). When you output, you cause side effects that only change the state of the outside world (output is produced); the act of outputting itself does not change the state of your program. So you might actually say that O is more "pure" than I.
Except that output does actually change the execution state of your program (It won't repeat the same output operation over and over; it has to have some sort of state change in order to move on). It all depends on how you look at it. But it's so much easier to lump the dirtiness of input and output into the same monad. Any useful program will both input and output. You can categorize the operations you use into one or the other, but I'm not seeing a convincing reason to employ the type system for the task.
Either you're messing with the outside world or you're not.
Short answer: IO is not I/O.
Other folks have longer answers if you like.
I think the division between pure and impure code is somewhat arbitrary. It depends on where you put the barrier. Haskell's designers decided to clearly separate the pure functional part of the language from the rest.
So we have the IO monad, which incorporates all possible effects (as different as disk reads/writes, networking, and memory access). And the language enforces a clear division by means of return types. This induces a kind of thinking which divides everything into pure and impure.
If information security is a concern, it would be quite natural to separate reading and writing. But for Haskell's initial goal, to be a standard lazy pure functional language, it was overkill.

What makes Iteratees worth the complexity?

First, I understand the how of iteratees, well enough that I could probably write a simplistic and buggy implementation without referring back to any existing ones.
What I'd really like to know is why people seem to find them so fascinating, or under what circumstances their benefits justify their complexity. Comparing them to lazy I/O there is a very clear benefit, but that seems an awful lot like a straw man to me. I never felt comfortable about lazy I/O in the first place, and I avoid it except for the occasional hGetContents or readFile, mostly in very simple programs.
In real-world scenarios I generally use traditional I/O interfaces with control abstractions appropriate to the task. In that context I just don't see the benefit of iteratees, or to what task they are an appropriate control abstraction. Most of the time they seem more like unnecessary complexity or even a counterproductive inversion of control.
I've read a fair number of articles about them and sources that make use of them, but have not yet found a compelling example that actually made me think anything along the lines of "oh, yeah, I'd have used them there too." Maybe I just haven't read the right ones. Or perhaps there is a yet-to-be-devised interface, simpler than any I've yet seen, that would make them feel less like a Swiss Army Chainsaw.
Am I just suffering from not-invented-here syndrome or is my unease well-founded? Or is it perhaps something else entirely?
As to why people find them so fascinating, I think because they're such a simple idea. The recent discussion on Haskell-cafe about a denotational semantics for iteratees devolved into a consensus that they're so simple they're barely worth describing. The phrase "little more than a glorified left-fold with a pause button" sticks out to me from that thread. People who like Haskell tend to be fond of simple, elegant structures, so the iteratee idea is likely very appealing.
For me, the chief benefits of iteratees are:
- Composability. Not only can iteratees be composed, but enumerators can too. This is very powerful.
- Safe resource usage. Resources (memory and handles, mostly) cannot escape their local scope. Compare to strict I/O, where it's easier to create space leaks by not cleaning up.
- Efficiency. Iteratees can be highly efficient; competitive with or better than both lazy I/O and strict I/O.
I have found that iteratees provide the greatest benefits when working with a single logical unit of data that comes from multiple sources. This is when composability is most helpful, and resource management with strict I/O is most annoying (e.g. nested allocas or brackets).
For example, in a work-in-progress audio editor, a single logical chunk of sound data is a set of offsets into multiple audio files. I can process that single chunk of sound by doing something like this (from memory, but I think this is right):
enumSound :: MonadIO m => Sound -> Enumerator s m a
enumSound snd = foldr (>=>) enumEof . map enumFile $ sndFiles snd
This seems clear, concise, and elegant to me, much more so than the equivalent strict I/O. Iteratees are also powerful enough to incorporate any processing I want to do, including writing output, so I find this very nice. If I used lazy I/O I could get something as elegant, but the extra care to make sure resources are consumed and GC'd would outweigh the advantages IMO.
I also like that you need to explicitly retain data in iteratees, which avoids the notorious mean xs = sum xs / length xs space leak.
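For reference, that leak looks like this (with the fromIntegral the real version needs):
-- sum xs is demanded first, but length xs is needed afterwards,
-- so the entire list stays live in memory until both finish.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)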
Of course, I don't use iteratees for everything. As an alternative I really like the with* idiom, but when you have multiple resources that need to be nested that gets complex very quickly.
Essentially, it's about doing IO in a functional style, correctly and efficiently. That's all, really.
Correct and efficient are easy enough using quasi-imperative style with strict IO. Functional style is easy with lazy IO, but it's technically cheating (using unsafeInterleaveIO under the hood) and can have issues with resource management and efficiency.
In very, very general terms, a lot of pure functional code follows a pattern of taking some data, recursively expanding it into smaller pieces, transforming the pieces in some fashion, then recombining it into a final result. The structure may be implicit (in the call graph of the program) or an explicit data structure being traversed.
But this falls apart when IO is involved. Say your initial data is a file handle, the "recursively expand" step is reading a line from it, and you can't read the entire file into memory at once. This forces the entire read-transform-recombine process to be done for each line before reading the next one, so instead of the clean "unfold, map, fold" structure the steps get mashed together into explicitly recursive monadic functions using strict IO.
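A sketch of that shape, with character counting standing in for the transform step (the function name is illustrative):
import System.IO

-- Read a line, transform it, fold it into the accumulator, recurse:
-- expansion, transformation, and recombination all mashed into one loop.
countChars :: Handle -> Int -> IO Int
countChars h acc = do
  eof <- hIsEOF h
  if eof
    then return acc
    else do
      line <- hGetLine h
      countChars h (acc + length line)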
Iteratees provide an alternative structure to solve the same problem. The "transform and recombine" steps are extracted and, instead of being functions, are changed into a data structure representing the current state of the computation. The "recursively expand" step is given the responsibility of obtaining the data and feeding it to an (otherwise passive) iteratee.
What benefits does this offer? Among other things:
- Because an iteratee is a passive object that performs single steps of a computation, they can be easily composed in different ways -- for instance, interleaving two iteratees instead of running them sequentially.
- The interface between iteratees and enumerators is pure, just a stream of values being processed, so a pure function can be freely spliced in between them.
- Data sources and computations are oblivious to each other's internal workings, decoupling input and resource management from processing and output.
The end result is that a program can have a high-level structure much closer to what a pure functional version would look like, with many of the same benefits to compositionality, while simultaneously having efficiency comparable to the more imperative, strict IO version.
As for being "worth the complexity"? Well, that's the thing--they're really not that complex, just a bit new and unfamiliar. The idea's been floating around for only, what, a couple years? Give it some time for things to shake out as people use iteratee-based IO in larger projects (e.g., with things like Snap), and for more examples/tutorials to appear. It's likely that, in hindsight, the current implementations will seem very rough around the edges.
Somewhat related: You may want to read this discussion about functional-style IO. Iteratees aren't mentioned all that much, but the central issue is very similar. In particular this solution, which is both very elegant and goes even further than iteratees in abstracting incremental IO.
under what circumstances their benefits justify their complexity
Every language has strict (classical) IO, where all resources are managed by the user. Haskell also provides ubiquitous lazy IO, where all resource management is delegated to the system.
However, that can create problems, as the scope of resources is dependent on runtime demand properties.
Iteratees strike a third way:
- High-level abstractions, like lazy IO.
- Explicit, lexical scoping of resources, like strict IO.
It is justified when you have complex IO processing tasks, but very tight bounds on resource use. An example is a web server.
Indeed, Snap is built around iteratee IO on top of epoll.
