How come my IO runs so slowly in Erlang?

I'm reading 512^2 whitespace-delimited doubles from a text file into my Erlang program by piping them to stdin.
In Erlang this takes 2m25s; an equivalent Haskell program takes 3s, so I must be going against the Erlang way of doing things somehow.
Am I using Erlang's IO primitives in a stupid way, or is there something else wrong with my program?
Note that I don't care about the order of the values in the resulting list, so no reverse operation.
Erlang:
-module(iotest).

-export([main/0]).

main() ->
    Values = read(),
    io:write(Values).

read() -> read([]).

read(Acc) ->
    %% io:fread returns {ok, Terms} where Terms is a list of parsed values
    case io:fread("", "~f") of
        {ok, [Value]} -> read([Value | Acc]);
        eof -> Acc
    end.
Haskell:
module IOTest (
    main
) where

main :: IO ()
main = do
    text <- getContents
    let values = map read (words text) :: [Double]
    putStrLn $ show values
    return ()
Thanks very much for any help.

No, you are not using Erlang IO in a stupid way. It's a problem with Erlang IO, which is not known for being fast. Erlang is widely used for writing servers, so socket-oriented IO is excellently tuned. Block-oriented file IO is not so bad, but using the io module for working with stdin doesn't work well. Erlang is not widely used for this kind of work. If you need this kind of operation you should write your own specialized input routine. You have two options there:
use io for reading from the file in raw and binary mode, split the input using the binary module, and then use list_to_float/1 for conversion;
use a specialized port-oriented stdin reading routine (as you can see for example in http://shootout.alioth.debian.org/u64q/program.php?test=regexdna&lang=hipe&id=7; note the read/0 function and the -noshell -noinput parameters for VM invocation) and then continue as in the first option.
In my opinion (and from my previous experience) the biggest impact in your case comes from using a scan-like input routine for float decoding, seconded by slow (repeated) io invocation, but it would take some nontrivial profiling to prove it.


What is pipes/conduit trying to solve

I have seen people recommending pipes/conduit library for various lazy IO related tasks. What problem do these libraries solve exactly?
Also, when I try to use some library from Hackage, there are often three different versions of it. Example:
attoparsec
pipes-attoparsec
attoparsec-conduit
This confuses me. For my parsing tasks, should I use attoparsec or pipes-attoparsec/attoparsec-conduit? What benefit does the pipes/conduit version give me compared to plain vanilla attoparsec?
Lazy IO
Lazy IO works like this
readFile :: FilePath -> IO ByteString
where ByteString is guaranteed to only be read chunk-by-chunk. To do so we could (almost) write
-- given `readChunk` which reads a chunk beginning at n
readChunk :: FilePath -> Int -> IO (Int, ByteString)

readFile fp = readChunks 0 where
    readChunks n = do
        (n', chunk) <- readChunk fp n
        chunks <- readChunks n'
        return (chunk <> chunks)
but here we note that the IO action readChunks n' is performed prior to returning even the partial result available as chunk. This means we're not lazy at all. To combat this we use unsafeInterleaveIO
readFile fp = readChunks 0 where
    readChunks n = do
        (n', chunk) <- readChunk fp n
        chunks <- unsafeInterleaveIO (readChunks n')
        return (chunk <> chunks)
which causes readChunks n' to return immediately, thunking an IO action to be performed only when that thunk is forced.
That's the dangerous part: by using unsafeInterleaveIO we've delayed a bunch of IO actions to non-deterministic points in the future that depend upon how we consume our chunks of ByteString.
Fixing the problem with coroutines
What we'd like to do is slide a chunk processing step in between the call to readChunk and the recursion on readChunks.
readFileCo :: Monoid a => FilePath -> (ByteString -> IO a) -> IO a
readFileCo fp action = readChunks 0 where
    readChunks n = do
        (n', chunk) <- readChunk fp n
        a <- action chunk
        as <- readChunks n'
        return (a <> as)
Now we've got the chance to perform arbitrary IO actions after each small chunk is loaded. This lets us do much more work incrementally without completely loading the ByteString into memory. Unfortunately, it's not terrifically compositional--we need to build our consumption action and pass it to our ByteString producer in order for it to run.
Pipes-based IO
This is essentially what pipes solves--it allows us to compose effectful co-routines with ease. For instance, we now write our file reader as a Producer which can be thought of as "streaming" the chunks of the file when its effect gets run finally.
produceFile :: FilePath -> Producer ByteString IO ()
produceFile fp = produce 0 where
    produce n = do
        (n', chunk) <- liftIO (readChunk fp n)
        yield chunk
        produce n'
Note the similarities between this code and readFileCo above—we simply replace the call to the coroutine action with yielding the chunk we've produced so far. This call to yield builds a Producer type instead of a raw IO action which we can compose with other Pipes types in order to build a nice consumption pipeline called an Effect IO ().
All of this pipe building gets done statically without actually invoking any of the IO actions. This is how pipes lets you write your coroutines more easily. All of the effects get triggered at once when we call runEffect in our main IO action.
runEffect :: Effect IO () -> IO ()
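To make this concrete, here is a minimal, self-contained pipeline of my own (assuming only the pipes package and its Pipes.Prelude module): composing the stages is pure plumbing, and no IO happens until runEffect.
import Pipes
import qualified Pipes.Prelude as P

-- a Producer that yields five numbers, one at a time
numbers :: Producer Int IO ()
numbers = mapM_ yield [1 .. 5 :: Int]

main :: IO ()
main = runEffect $ numbers >-> P.map (* 2) >-> P.print
-- prints 2, 4, 6, 8, 10; all effects run inside runEffect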
Attoparsec
So why would you want to plug attoparsec into pipes? Well, attoparsec is optimized for lazy parsing. If you are producing the chunks fed to an attoparsec parser in an effectful way then you'll be at an impasse. You could
1. Use strict IO and load the entire string into memory, only to consume it lazily with your parser. This is simple and predictable, but inefficient.
2. Use lazy IO and lose the ability to reason about when your production IO effects will actually get run, causing possible resource leaks or closed-handle exceptions according to the consumption schedule of your parsed items. This is more efficient than (1) but can easily become unpredictable; or,
3. Use pipes (or conduit) to build up a system of coroutines which includes your lazy attoparsec parser, allowing it to operate on as little input as it needs while producing parsed values as lazily as possible across the entire stream (see the sketch after this list).
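For intuition, both pipes-attoparsec and attoparsec-conduit build on attoparsec's own incremental interface, in which a parse may return a Partial continuation that waits for more input. Here is a minimal hand-rolled sketch of that glue (parseHandle is a hypothetical helper of mine, not part of either library):
import qualified Data.Attoparsec.ByteString.Char8 as A
import qualified Data.ByteString as B
import System.IO (Handle, stdin)

-- feed the parser one chunk at a time; a Partial result hands back a
-- continuation, so we never hold more than one chunk in memory
parseHandle :: Handle -> A.Parser a -> IO (Either String a)
parseHandle h p = B.hGetSome h 4096 >>= go . A.parse p
  where
    go (A.Fail _ _ err) = return (Left err)
    -- hGetSome returns an empty chunk at EOF, which signals
    -- end-of-input to the waiting continuation
    go (A.Partial k)    = B.hGetSome h 4096 >>= go . k
    go (A.Done _ r)     = return (Right r)

main :: IO ()
main = parseHandle stdin A.double >>= print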
If you want to use attoparsec, use attoparsec
For my parsing tasks should I use attoparsec or pipes-attoparsec/attoparsec-conduit?
Both pipes-attoparsec and attoparsec-conduit transform a given attoparsec Parser into a sink/conduit or pipe. Therefore you have to use attoparsec either way.
What benefit do the pipes/conduit version give me as compared to the plain vanilla attoparsec?
They work with pipes and conduit, where the vanilla one won't (at least not out-of-the-box).
If you don't use conduit or pipes, and you're satisfied with the current performance of your lazy IO, there's no need to change your current flow, especially if you're not writing a big application or processing large files. You can simply use attoparsec.
However, that assumes that you know the drawbacks of lazy IO.
What's the matter with lazy IO? (Problem study withFile)
Let's not forget your first question:
What problem do these libraries solve exactly?
They solve the streaming-data problem (see 1 and 3 in the references), which occurs in functional languages with lazy IO. Lazy IO sometimes doesn't give you what you want (see the example below), and sometimes it's hard to determine the actual system resources needed by a specific lazy operation (is the data read/written in chunks/bytes/buffered/onclose/onopen…).
Example for over-laziness
import System.IO

main = withFile "myfile" ReadMode hGetContents
       >>= return . (take 5)
       >>= putStrLn
This won't print anything, since the evaluation of the data happens in putStrLn, but the handle has been closed already at this point.
Fixing fire with poisonous acid
While the following snippet fixes this, it has another nasty feature:
main = withFile "myfile" ReadMode $ \handle ->
    hGetContents handle
    >>= return . (take 5)
    >>= putStrLn
In this case hGetContents will read all of the file, something you didn't expect at first. If you just want to check the magic bytes of a file which could be several GB in size, this is not the way to go.
Using withFile correctly
The solution is, obviously, to consume the contents within the withFile context:
main = withFile "myfile" ReadMode $ \handle ->
    fmap (take 5) (hGetContents handle)
    >>= putStrLn
This is, by the way, also the solution mentioned by the author of pipes:
This [..] answers a question people sometimes ask me about pipes, which I will paraphrase here:
If resource management is not a core focus of pipes, why should I use pipes instead of lazy IO?
Many people who ask this question discovered stream programming through Oleg, who framed the lazy IO problem in terms of resource management. However, I never found this argument compelling in isolation; you can solve most resource management issues simply by separating resource acquisition from the lazy IO, like this: [see last example above]
Which brings us back to my previous statement:
You can simply use attoparsec [...][with lazy IO, assuming] that you know the drawbacks of lazy IO.
References
Iteratee I/O, which explains the example better and provides a better overview
Gabriel Gonzalez (maintainer/author of pipes): Reasoning about stream programming
Michael Snoyman (maintainer/author of conduit): Conduit versus Enumerator
Here's a great podcast with authors of both libraries:
http://www.haskellcast.com/episode/006-gabriel-gonzalez-and-michael-snoyman-on-pipes-and-conduit/
It'll answer most of your questions.
In short, both of those libraries approach the problem of streaming, which is very important when dealing with IO. In essence they manage the transfer of data in chunks,
thus allowing you to e.g. transfer a 1GB file consuming just 64KB of RAM on both the server and the client. Without streaming you would have had to allocate as much memory on both ends.
An older alternative to those libraries is lazy IO, but it is filled with issues and makes applications error-prone. Those issues are discussed in the podcast.
Concerning which one of those libraries to use, it's more of a matter of taste. I prefer "pipes". The detailed differences are discussed in the podcast too.

Incorporating Conduit to ordinary functions

I wrote a simple program in which I read a big XML file, do some processing on its contents, and then save the processed data in a new file.
The original main function looks something like this:
main = do
    content <- B.readFile "/home/sibi/github/data/chennai.osm"
    let tags = removeUnwanted $ parseTags content
        hospitals = toHospital $ extractHospitalNode tags
    BL.writeFile "osmHospitals.json" (encode hospitals)
But this code eats up all available memory and takes a huge amount of time to finish. So I decided to use the conduit library to make the program run in constant memory.
But after reading the conduit tutorial, I still haven't got the idea of how to make the above program use the features of the conduit library.
I figured out that I can use conduit's sourceFile, which can stream the content of the file. But then how do I apply the function parseTags (which is a function from the TagSoup library) and other simple functions to the streamed content?
Edit: The entire code is here
There's a huge disconnect between the methodology of parseTags and the methodology of conduit and pipes: parseTags assumes it can access the next chunk of data purely, while pipes/conduit let you handle situations where that's impossible, such as streaming from a file. In order to mix parsing into pipes/conduit you must have a way to mix consuming a parse into steps which pull new chunks of data.
(I'll use pipes in the sequel because I'm more familiar with them, but the idea is transferable.)
We can see this disconnect in the types, though I'll begin with a slightly restricted version.
parseTags :: Lazy.ByteString -> [Tag Lazy.ByteString]
We can think of Lazy.ByteString as streaming apparatus all by itself, it is, after all, essentially just
type LazyByteString = [Strict.ByteString]
such that if we were generating the Lazy.ByteString ourselves then we could rely on the laziness of lists to ensure that we don't generate more than what parseTags needed in order to proceed (I'll assume, without looking, that parseTags is written so that it could incrementally parse a streaming structure like that).
sillyGen :: LazyByteString
sillyGen = gen 10 where
    gen 0 = []
    gen n = "<tag> </tag>" : gen (n - 1)
Now the problem here is that the streaming behavior of a list depends crucially upon being able to generate the tail of the list purely. In the discussion so far there hasn't been any mention of a monad at all. Unfortunately, that cannot be true with a string being streamed from a file---we need to somehow integrate an IO action between each streamed chunk where we consider whether or not we've reached EOF and close the file as necessary.
This is exactly the realm of pipes and conduit, so let's look at what they do to solve that issue.
-- from pipes-bytestring
fromHandle :: Handle -> Producer' Strict.ByteString IO ()
We can think of fromHandle as being the "monadically-interwoven" equivalent to
Lazy.hGetContents :: Handle -> IO Lazy.ByteString
The types suggest a crucial difference between these two operations--hGetContents can be executed in exactly one IO action while when we pass a Handle to pipes-bytestring's fromHandle it returns a type which is parameterized over IO but cannot be simply freed from it. This is exactly indicative of hGetContents using lazy IO (which can be unpredictable due to the use of unsafeInterleaveIO) while fromHandle uses deterministic streaming.
We can write a type similar to Producer Strict.ByteString IO () as
data IOStreamBS = IOSBS { stepStream :: IO (Strict.ByteString, Either IOStreamBS ()) }
In other words we can think of Producer Strict.ByteString IO () as not much more than an IO action which produces exactly the next chunk of the file and (possibly) a new action to get the next chunk. This is how pipes and conduit provide deterministic streaming.
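As a small illustration, here is how one might drive such a value by hand (drain is an invented helper; the type from above is repeated for self-containment):
import qualified Data.ByteString as Strict

data IOStreamBS = IOSBS { stepStream :: IO (Strict.ByteString, Either IOStreamBS ()) }

-- run every step in sequence, collecting the chunks; each read is an
-- explicit IO step, so nothing is interleaved behind our back
drain :: IOStreamBS -> IO [Strict.ByteString]
drain s = do
    (chunk, next) <- stepStream s
    case next of
        Left s'  -> fmap (chunk :) (drain s')
        Right () -> return [chunk]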
But it also means that you cannot escape from the IO in one fell swoop—you have to carry it around.
We might thus want to adjust parseTags, which is capable of some generalization over its input, to just accept Producer Strict.ByteString IO () as a StringLike type
parseTags :: StringLike str => str -> [Tag str]
Let's assume for argument that we've instantiated StringLike (Producer Strict.ByteString IO ()). That would mean that applying parseTags to our producer would provide us with a list of Tag (Producer Strict.ByteString IO ()).
type DetStream = Producer Strict.ByteString IO ()
parseTags :: DetStream -> [Tag DetStream]
For this to happen we would have had to peek into our Producer and cut it up into chunks without executing anything in the IO monad. By this point it should be clear that such a function is impossible---we couldn't even get the first chunk from the file without doing something in IO.
To remedy this situation, systems like pipes-parse and pipes-group have arisen which replace the function signature with something more like
parseTagsGrouped :: Producer Strict.ByteString IO ()
                 -> FreeT (Producer (Tag Strict.ByteString) IO) IO ()
which is scary looking but serves an identical purpose to parseTags except that it generalizes the list to a structure which allows us to execute arbitrary IO actions between each element. This kind of transformation, as the type shows, can be done purely and thus allows us to assemble our streaming machinery using pure combinations and only incur an IO step when we execute it at the end (using runEffect).
So, all said and done, it's probably not going to be possible to use pipes or conduit to stream to parseTags---it simply assumes that certain transformations can be done purely, pushing all the IO to one point in time, while pipes/conduit are basically mechanisms for spreading IO throughout a computation without too much mental overhead.
If you're stuck using parseTags, however, you can get by using lazy IO as long as you're careful. Try a few variations with hGetContents from Data.ByteString.Lazy. The primary problem will be that the file may close before the unsafeInterleaveIO'd operations actually get around to reading it. You'll thus need to manage strictness very carefully.
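For instance, a minimal sketch of that lazy-IO route, relying on TagSoup's StringLike instance for lazy ByteString (the file name is hypothetical, and readFile manages the handle implicitly, with the caveats above):
import qualified Data.ByteString.Lazy.Char8 as BL
import Text.HTML.TagSoup (Tag, parseTags)

main :: IO ()
main = do
    content <- BL.readFile "input.xml"    -- lazy read: chunks pulled on demand
    let tags = parseTags content :: [Tag BL.ByteString]
    mapM_ print (take 10 tags)            -- forces only a prefix of the file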
Essentially that's the big difference between pipes/conduit and lazy IO. When using lazy IO, all of the "read a chunk" operations are made invisible and implicitly controlled by Haskell laziness. This is dynamic, implicit, and tough to observe or predict. In pipes/conduit all of this motion is made extraordinarily explicit and static, but it's up to you to manage the complexity.
What if you try System.IO, read the file line by line, and process it (or read parts of the XML file)?

What is the difference between pure and impure in haskell?

What is the difference between pure and impure in Haskell?
When doing IO in Haskell, what does it mean to keep the pure and impure items separate?
Essentially you want to keep as little code as possible in the "impure section". Code written in the IO monad is forever tainted with insecurity. A function with the signature IO Int will return an integer in the IO monad, but it could on top of that send nuclear missiles to the moon. We have no way of knowing without studying every line of the code.
For example, let's say you want to write a program that takes a string and appends ", dude" to it.
main = do
    line <- getLine
    putStrLn $ line ++ ", dude"
Some of the parts of the code are required to be in the IO monad, because they have side effects. This includes getLine and putStrLn. However, putting the two strings together does not.
main = do
    line <- getLine
    putStrLn $ addDude line

addDude :: String -> String
addDude input = input ++ ", dude"
The signature of addDude shows that it is pure: String -> String. No IO here. This means we can assume that addDude will behave at least in that way. It will take one string and return one string. It is impossible for it to have side effects. It is impossible for it to blow up the moon.
Purity simply means causing no side-effects (reading from a disk, moving a robot arm, etc.)
Separating pure from impure functions means that you can know more about what your code is going to do. For example, when you say 1 + 2, you know for sure by its type (Int -> Int -> Int) that the only thing it does is take two numbers and produce a third number. If its type were Int -> Int -> IO Int, it might move a robot arm each time it adds the two numbers.
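As a minimal sketch of that contrast (addAndLog is an invented name for illustration):
add :: Int -> Int -> Int            -- pure: the type promises a number and nothing else
add x y = x + y

addAndLog :: Int -> Int -> IO Int   -- impure: the type admits side effects
addAndLog x y = do
    putStrLn ("adding " ++ show x ++ " and " ++ show y)  -- a visible effect
    return (x + y)

main :: IO ()
main = do
    print (add 1 2)        -- always 3; nothing else can happen
    _ <- addAndLog 1 2     -- returns 3, but also prints a line
    return ()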
A good starting place for the basics of Haskell can be found here:
http://learnyouahaskell.com/introduction#so-whats-haskell
Everything in Haskell is pure. What you are reading about is likely about code inside the IO monad vs. outside it. Once you put something into the IO monad, it can never "escape": you have to stay in the IO monad. Therefore, the IO monad has a tendency to "invade" your code: if you have a thing that returns IO, then any code that calls it also has to return IO, and so on. Therefore it is best to use the IO monad only where necessary, as close to the top level of the program as possible, and to separate out any parts of the computation that are pure into pure functions.

What other ways can state be handled in a pure functional language besides with Monads?

So I started to wrap my head around monads (as used in Haskell). I'm curious what other ways IO or state can be handled in a pure functional language (both in theory and in reality). For example, there is a logical language called "Mercury" that uses "effect typing". How would effect typing work in a language such as Haskell? How do other systems work?
There are several different questions involved here.
First, IO and State are very different things. State is easy to do
yourself: Just pass an extra argument to every function, and return an extra
result, and you have a "stateful function"; for example, turn a -> b into
a -> s -> (b,s).
There's no magic involved here: Control.Monad.State provides a wrapper that
makes working with "state actions" of the form s -> (a,s) convenient, as well
as a bunch of helper functions, but that's it.
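As a minimal sketch of such a hand-rolled stateful function (my own illustration; no monads involved):
-- label each list element with a running counter by threading the
-- state (an Int) explicitly, in the a -> s -> (b, s) style
label :: [a] -> Int -> ([(Int, a)], Int)
label []       n = ([], n)
label (x : xs) n =
    let (rest, n') = label xs (n + 1)
    in  ((n, x) : rest, n')

main :: IO ()
main = print (label "abc" 0)  -- ([(0,'a'),(1,'b'),(2,'c')],3)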
I/O, by its nature, has to have some magic in its implementation. But there are
a lot of ways of expressing I/O in Haskell that don't involve the word "monad".
If we had an IO-free subset of Haskell as-is, and we wanted to invent IO from
scratch, without knowing anything about monads, there are many things we might
do.
For example, if all we want to do is print to stdout, we might say:
type PrintOnlyIO = String
main :: PrintOnlyIO
main = "Hello world!"
And then have an RTS (runtime system) which evaluates the string and prints it.
This lets us write any Haskell program whose I/O consists entirely of printing
to stdout.
This isn't very useful, however, because we want interactivity! So let's invent
a new type of IO which allows for it. The simplest thing that comes to mind is
type InteractIO = String -> String
main :: InteractIO
main = map toUpper
This approach to IO lets us write any code which reads from stdin and writes to
stdout (the Prelude comes with a function interact :: InteractIO -> IO ()
which does this, by the way).
This is much better, since it lets us write interactive programs. But it's
still very limited compared to all the IO we want to do, and also quite
error-prone (if we accidentally try to read too far into stdin, the program
will just block until the user types more in).
We want to be able to do more than read stdin and write stdout. Here's how
early versions of Haskell did I/O, approximately:
data Request  = PutStrLn String | GetLine | Exit | ...
data Response = Success | Str String | ...

type DialogueIO = [Response] -> [Request]

main :: DialogueIO
main resps1 =
    PutStrLn "what's your name?"
    : GetLine
    : case resps1 of
        Success : Str name : resps2 ->
            PutStrLn ("hi " ++ name ++ "!")
            : [Exit]
When we write main, we get a lazy list argument and return a lazy list as a
result. The lazy list we return has values like PutStrLn s and GetLine;
after we yield a (request) value, we can examine the next element of the
(response) list, and the RTS will arrange for it to be the response to our
request.
There are ways to make working with this mechanism nicer, but as you can
imagine, the approach gets pretty awkward pretty quickly. Also, it's
error-prone in the same way as the previous one.
Here's another approach which is much less error-prone, and conceptually very
close to how Haskell IO actually behaves:
data ContIO = Exit | PutStrLn String ContIO | GetLine (String -> ContIO) | ...

main :: ContIO
main =
    PutStrLn "what's your name?" $
    GetLine $ \name ->
    PutStrLn ("hi " ++ name ++ "!") $
    Exit
The key is that instead of taking a "lazy list" of responses as one big
argument at the beginning of main, we make individual requests that accept one
argument at a time.
Our program is now just a regular data type -- a lot like a linked list, except
you can't just traverse it normally: When the RTS interprets main, sometimes
it encounters a value like GetLine which holds a function; then it has to get
a string from stdin using RTS magic, and pass that string to the function,
before it can continue. Exercise: Write interpret :: ContIO -> IO ().
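One possible solution to that exercise, as a sketch (assuming the ContIO type above, minus the trailing ...):
-- interpret each constructor by performing the corresponding real IO
-- action, then continue with the rest of the program
interpret :: ContIO -> IO ()
interpret Exit           = return ()
interpret (PutStrLn s k) = putStrLn s >> interpret k
interpret (GetLine f)    = getLine >>= interpret . f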
Note that none of these implementations involve "world-passing".
"world-passing" isn't really how I/O works in Haskell. The actual
implementation of the IO type in GHC involves an internal type called
RealWorld, but that's only an implementation detail.
Actual Haskell IO adds a type parameter so we can write actions that
"produce" arbitrary values -- so it looks more like data IO a = Done a |
PutStr String (IO a) | GetLine (String -> IO a) | .... That gives us more
flexibility, because we can create "IO actions" that produce arbitrary
values.
(As Russell O'Connor points out,
this type is just a free monad. We can write a Monad instance for it easily.)
Where do monads come into it, then? It turns out that we don't need Monad for
I/O, and we don't need Monad for state, so why do we need it at all? The
answer is that we don't. There's nothing magical about the type class Monad.
However, when we work with IO and State (and lists and functions and
Maybe and parsers and continuation-passing style and ...) for long enough, we
eventually figure out that they behave pretty similarly in some ways. We might
write a function that prints every string in a list, and a function that runs
every stateful computation in a list and threads the state through, and they'll
look very similar to each other.
Since we don't like writing a lot of similar-looking code, we want a way to
abstract it; Monad turns out to be a great abstraction, because it lets us
abstract many types that seem very different, but still provide a lot of useful
functionality (including everything in Control.Monad).
Given bindIO :: IO a -> (a -> IO b) -> IO b and returnIO :: a -> IO a, we
could write any IO program in Haskell without ever thinking about monads. But
we'd probably end up replicating a lot of the functions in Control.Monad,
like mapM and forever and when and (>=>).
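As a sketch of that point (bindIO and returnIO are defined here in terms of the real operators, purely for demonstration):
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO = (>>=)      -- stand-in; imagine this were a primitive

returnIO :: a -> IO a
returnIO = return   -- stand-in; imagine this were a primitive

-- an IO program written without ever mentioning Monad
main :: IO ()
main =
    getLine `bindIO` \name ->
    putStrLn ("hi " ++ name ++ "!") `bindIO` \_ ->
    returnIO ()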
By implementing the common Monad API, we get to use the exact same code for
working with IO actions as we do with parsers and lists. That's really the only
reason we have the Monad class -- to capture the similarities between
different types.
Another major approach is uniqueness typing, as in Clean. The short story is that handles to state (including the real world) can only be used once, and functions that access mutable state return a new handle. This means that the output of the first call is an input of the second, forcing sequential evaluation.
Effect typing is used in Disciple, a Haskell-like language compiled by DDC, but to the best of my knowledge it would take considerable compiler work to enable it in, say, GHC. I shall leave discussion of the details to those better-informed than myself.
Well, first, what is state? It can manifest as a mutable variable, which you don't have in Haskell. You only have memory references (IORef, MVar, Ptr, etc.) and IO/ST actions to act on them.
However, state itself can be pure as well. To see that, review the Stream type:
data Stream a = Stream a (Stream a)
This is a stream of values. However, an alternative way to interpret this type is as a changing value:
stepStream :: Stream a -> (a, Stream a)
stepStream (Stream x xs) = (x, xs)
This gets interesting when you allow two streams to communicate. You then get the automaton category Auto:
newtype Auto a b = Auto (a -> (b, Auto a b))
This is really like Stream, except that now at every instant the stream gets some input value of type a. This forms a category, so one instant of a stream can get its value from the same instant of another stream.
Again a different interpretation of this: You have two computations that change over time and you allow them to communicate. So every computation has local state. Here is a type that is isomorphic to Auto:
data LS a b =
    forall s.
    LS s ((a, s) -> (b, s))
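To make Auto concrete, here is a small runnable sketch (total and runAuto are invented helpers for illustration):
newtype Auto a b = Auto (a -> (b, Auto a b))

-- an automaton with local state: the running total of its inputs
total :: Int -> Auto Int Int
total acc = Auto $ \x ->
    let acc' = acc + x
    in  (acc', total acc')

-- step an automaton through a list of inputs, collecting the outputs
runAuto :: Auto a b -> [a] -> [b]
runAuto _        []       = []
runAuto (Auto f) (x : xs) = let (y, a') = f x in y : runAuto a' xs

main :: IO ()
main = print (runAuto (total 0) [1, 2, 3, 4])  -- [1,3,6,10]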
Take a look at A History of Haskell: Being Lazy With Class. It describes two different approaches to doing I/O in Haskell, before monads were invented: continuations and streams.
There is an approach called Functional Reactive Programming that represents time-varying values and/or event streams as a first-class abstraction. A recent example that comes to my mind is Elm (it is written in Haskell and has a syntax similar to Haskell).
I'm curious - what other ways I/O or state can be handled in a pure functional language (both in theory or reality)?
I'll just add to what's already been mentioned here (note: some of these approaches don't seem to have an established name, so a few of the names below are improvised).
Approaches with freely-available descriptions or implementations:
"Orthogonal directives" - see An alternative approach to I/O by Maarten Fokkinga and Jan Kuper.
Pseudodata - see Nondeterminism with Referential Transparency in Functional Programming Languages by F. Warren Burton. The approach is used by Dave Harrison to implement clocks in his thesis Functional Real-Time Programming: The Language Ruth And Its Semantics, and name supplies in the functional pearl On generating unique names by Lennart Augustsson, Mikael Rittri and Dan Synek; there are also a few library implementations on Hackage.
Witnesses - see Witnessing Side Effects by Tachio Terauchi and Alex Aiken.
Observers - see Assignments for Applicative Languages by Vipin Swarup, Uday S. Reddy and Evan Ireland.
Other approaches - references only:
System tokens:
L. Augustsson. Functional I/O Using System Tokens. PMG Memo 72, Dept Computer Science, Chalmers University of Technology, S-412 96 Göteborg, 1989.
"Effect trees":
Rebelsky S.A. (1992) I/O trees and interactive lazy functional programming. In: Bruynooghe M., Wirsing M. (eds) Programming Language Implementation and Logic Programming. PLILP 1992. Lecture Notes in Computer Science, vol 631. Springer, Berlin, Heidelberg.

"Lazy IO" in Haskell?

I'm trying a little experiment in Haskell, wondering if it is possible to exploit laziness to process IO. I'd like to write a function that takes a String (a list of Chars) and produces a String, lazily. I would then like to be able to lazily feed it characters from IO, so each character would be processed as soon as it was available, and the output would be produced as the necessary characters became available. However, I'm not quite sure if/how I can produce a lazy list of characters from input inside the IO monad.
Regular String IO in Haskell is lazy. So your example should just work out of the box.
Here's an example, using the 'interact' function, which applies a function to a lazy stream of characters:
interact :: (String -> String) -> IO ()
Let's filter out the letter 'e' from the input stream, lazily (i.e. run in constant space):
main = interact $ filter (/= 'e')
You could also use getContents and putStr if you like. They're all lazy.
Running it to filter the letter 'e' from the dictionary:
$ ghc -O2 --make A.hs
$ ./A +RTS -s < /usr/share/dict/words
...
2 MB total memory in use (0 MB lost due to fragmentation)
...
so we see that it ran in a constant 2M footprint.
The simplest method of doing lazy IO involves functions such as interact, readFile, hGetContents, and such, as dons says; there's a more extended discussion of these in the book Real World Haskell that you might find useful. If memory serves me, all such functions are eventually implemented using the unsafeInterleaveIO that ephemient mentions, so you can also build your own functions that way if you want.
On the other hand, it might be wise to note that unsafeInterleaveIO is exactly what it says on the tin: unsafe IO. Using it--or functions based on it--breaks purity and referential transparency. This allows apparently pure functions (that is, that do not return an IO action) to effect the outside world when evaluated, produce different results from the same arguments, and all those other unpleasant things. In practice, most sensible ways of using unsafeInterleaveIO won't cause problems, and simple mistakes will usually result in obvious and easily diagnosed bugs, but you've lost some nice guarantees.
There are alternatives, of course; you can find assorted libraries on Hackage that provide restricted, safer lazy IO or conceptually different approaches. However, given that problems arise only rarely in practical use, I think most people are inclined to stick with the built-in, technically unsafe functions.
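If you do want to build your own, here is a minimal sketch of the technique (lazyLines is an invented helper; note that getLine raises an exception at EOF, so a real version would need to handle that):
import System.IO.Unsafe (unsafeInterleaveIO)

-- a lazily built list of stdin lines: each getLine runs only when
-- the corresponding cons cell is actually demanded
lazyLines :: IO [String]
lazyLines = unsafeInterleaveIO $ do
    l  <- getLine
    ls <- lazyLines
    return (l : ls)

main :: IO ()
main = do
    ls <- lazyLines
    mapM_ putStrLn (take 3 ls)  -- reads exactly three lines from stdin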
unsafeInterleaveIO :: IO a -> IO a
unsafeInterleaveIO allows an IO computation to be deferred lazily. When passed a value of type IO a, the IO will only be performed when the value of type a is demanded. This is used to implement lazy file reading; see System.IO.hGetContents.
For example, main = getContents >>= return . map Data.Char.toUpper >>= putStr is lazy; as you feed characters to stdin, you will get characters on stdout.
(This is the same as writing main = interact $ map Data.Char.toUpper, as in dons's answer.)
