I have seen people recommending pipes/conduit library for various lazy IO related tasks. What problem do these libraries solve exactly?
Also, when I try to use some Hackage library, it is highly likely that there are three different versions of it. For example:
attoparsec
pipes-attoparsec
attoparsec-conduit
This confuses me. For my parsing tasks, should I use attoparsec or pipes-attoparsec/attoparsec-conduit? What benefit do the pipes/conduit versions give me compared to plain vanilla attoparsec?
Lazy IO
Lazy IO works like this
readFile :: FilePath -> IO ByteString
where ByteString is guaranteed to only be read chunk-by-chunk. To do so we could (almost) write
-- given `readChunk` which reads a chunk beginning at n
readChunk :: FilePath -> Int -> IO (Int, ByteString)
readFile fp = readChunks 0 where
  readChunks n = do
    (n', chunk) <- readChunk fp n
    chunks <- readChunks n'
    return (chunk <> chunks)
but here we note that the IO action readChunks n' is performed prior to returning even the partial result available as chunk. This means we're not lazy at all. To combat this we use unsafeInterleaveIO
readFile fp = readChunks 0 where
  readChunks n = do
    (n', chunk) <- readChunk fp n
    chunks <- unsafeInterleaveIO (readChunks n')
    return (chunk <> chunks)
which causes readChunks n' to return immediately, thunking an IO action to be performed only when that thunk is forced.
That's the dangerous part: by using unsafeInterleaveIO we've delayed a bunch of IO actions to non-deterministic points in the future that depend upon how we consume our chunks of ByteString.
Fixing the problem with coroutines
What we'd like to do is slide a chunk processing step in between the call to readChunk and the recursion on readChunks.
readFileCo :: Monoid a => FilePath -> (ByteString -> IO a) -> IO a
readFileCo fp action = readChunks 0 where
  readChunks n = do
    (n', chunk) <- readChunk fp n
    a  <- action chunk
    as <- readChunks n'
    return (a <> as)
Now we've got the chance to perform arbitrary IO actions after each small chunk is loaded. This lets us do much more work incrementally without completely loading the ByteString into memory. Unfortunately, it's not terrifically compositional--we need to build our consumption action and pass it to our ByteString producer in order for it to run.
Pipes-based IO
This is essentially what pipes solves--it allows us to compose effectful coroutines with ease. For instance, we now write our file reader as a Producer, which can be thought of as "streaming" the chunks of the file when its effect finally gets run.
produceFile :: FilePath -> Producer ByteString IO ()
produceFile fp = produce 0 where
  produce n = do
    (n', chunk) <- liftIO (readChunk fp n)
    yield chunk
    produce n'
Note the similarities between this code and readFileCo above—we simply replace the call to the coroutine action with yielding the chunk we've produced so far. This call to yield builds a Producer type instead of a raw IO action which we can compose with other Pipes types in order to build a nice consumption pipeline called an Effect IO ().
All of this pipe building gets done statically without actually invoking any of the IO actions. This is how pipes lets you write your coroutines more easily. All of the effects get triggered at once when we call runEffect in our main IO action.
runEffect :: Effect IO () -> IO ()
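For example, a minimal sketch of such a pipeline, building on the hypothetical readChunk/produceFile above plus a made-up consumeChunks consumer that just prints each chunk, might look like:

import Data.ByteString (ByteString)
import Control.Monad (forever)
import Pipes

-- A made-up consumer: print each chunk as it arrives.
consumeChunks :: Consumer ByteString IO ()
consumeChunks = forever $ do
  chunk <- await
  lift (print chunk)

main :: IO ()
main = runEffect (produceFile "myfile" >-> consumeChunks)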
Attoparsec
So why would you want to plug attoparsec into pipes? Well, attoparsec is optimized for lazy parsing. If you are producing the chunks fed to an attoparsec parser in an effectful way then you'll be at an impasse. You could:
1. Use strict IO and load the entire string into memory only to consume it lazily with your parser. This is simple and predictable, but inefficient.
2. Use lazy IO and lose the ability to reason about when your production IO effects will actually get run, causing possible resource leaks or closed-handle exceptions according to the consumption schedule of your parsed items. This is more efficient than (1) but can easily become unpredictable; or,
3. Use pipes (or conduit) to build up a system of coroutines which includes your lazy attoparsec parser, allowing it to operate on as little input as it needs while producing parsed values as lazily as possible across the entire stream. (A sketch of this option follows below.)
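A rough sketch of option (3), assuming pipes-attoparsec's parsed function (check the current API for its exact type) and a placeholder attoparsec Parser called myParser whose results have a Show instance:

import Pipes
import Pipes.Attoparsec (parsed)
import qualified Pipes.ByteString as PB
import qualified Pipes.Prelude as P
import System.IO (IOMode (ReadMode), withFile)

-- Stream the file, run myParser over it repeatedly, and print each parsed value.
main :: IO ()
main = withFile "input.dat" ReadMode $ \h -> do
  result <- runEffect $ parsed myParser (PB.fromHandle h) >-> P.print
  case result of
    Left (parseError, _leftovers) -> print parseError
    Right ()                      -> return ()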
If you want to use attoparsec, use attoparsec
For my parsing tasks should I use attoparsec or pipes-attoparsec/attoparsec-conduit?
Both pipes-attoparsec and attoparsec-conduit transform a given attoparsec Parser into a sink/conduit or pipe. Therefore you have to use attoparsec either way.
What benefit do the pipes/conduit version give me as compared to the plain vanilla attoparsec?
They work with pipes and conduit, where the vanilla one won't (at least not out-of-the-box).
If you don't use conduit or pipes, and you're satisfied with the current performance of your lazy IO, there's no need to change your current flow, especially if you're not writing a big application or processing large files. You can simply use attoparsec.
However, that assumes that you know the drawbacks of lazy IO.
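For instance, a minimal sketch of plain attoparsec over lazily read input might look like this (myParser is a placeholder for your parser, and its result is assumed to have a Show instance):

import qualified Data.Attoparsec.ByteString.Lazy as AL
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  contents <- BL.readFile "input.dat"   -- lazy IO: chunks are read on demand
  case AL.parse myParser contents of
    AL.Done _ result    -> print result
    AL.Fail _ _ message -> putStrLn message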
What's the matter with lazy IO? (A problem study of withFile)
Let's not forget your first question:
What problem do these libraries solve exactly ?
They solve the streaming data problem (see references 1 and 3 below) that occurs in functional languages with lazy IO. Lazy IO sometimes doesn't give you what you want (see the example below), and it can be hard to determine the actual system resources needed by a specific lazy operation (is the data read/written in chunks/bytes/buffered/on close/on open…).
Example for over-laziness
import System.IO
main = withFile "myfile" ReadMode hGetContents
>>= return . (take 5)
>>= putStrLn
This won't print anything, since the evaluation of the data happens in putStrLn, but the handle has been closed already at this point.
Fixing fire with poisonous acid
While the following snippet fixes this, it has another nasty feature:
main = withFile "myfile" ReadMode $ \handle ->
hGetContents handle
>>= return . (take 5)
>>= putStrLn
In this case hGetContents will read all of the file, something you didn't expect at first. If you just want to check the magic bytes of a file which could be several GB in size, this is not the way to go.
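As an aside, if all you really need is a fixed-size prefix such as magic bytes, a strict read sidesteps laziness entirely. A minimal sketch using Data.ByteString.hGet:

import qualified Data.ByteString as BS
import System.IO

main :: IO ()
main = withFile "myfile" ReadMode $ \handle -> do
  magic <- BS.hGet handle 5   -- read at most 5 bytes, strictly
  print magic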
Using withFile correctly
The solution is, obviously, to do everything inside the withFile context:
main = withFile "myfile" ReadMode $ \handle ->
fmap (take 5) (hGetContents handle)
>>= putStrLn
This is, by the way, also the solution mentioned by the author of pipes:
This [..] answers a question people sometimes ask me about pipes, which I will paraphrase here:
If resource management is not a core focus of pipes, why should I use pipes instead of lazy IO?
Many people who ask this question discovered stream programming through Oleg, who framed the lazy IO problem in terms of resource management. However, I never found this argument compelling in isolation; you can solve most resource management issues simply by separating resource acquisition from the lazy IO, like this: [see last example above]
Which brings us back to my previous statement:
You can simply use attoparsec [...][with lazy IO, assuming] that you know the drawbacks of lazy IO.
References
1. Iteratee I/O, which explains the example better and provides a better overview
2. Gabriel Gonzalez (maintainer/author of pipes): Reasoning about stream programming
3. Michael Snoyman (maintainer/author of conduit): Conduit versus Enumerator
Here's a great podcast with authors of both libraries:
http://www.haskellcast.com/episode/006-gabriel-gonzalez-and-michael-snoyman-on-pipes-and-conduit/
It'll answer most of your questions.
In short, both of those libraries approach the problem of streaming, which is very important when dealing with IO. In essence they manage the transfer of data in chunks, thus allowing you to, e.g., transfer a 1GB file consuming just 64KB of RAM on both the server and the client. Without streaming you would have had to allocate as much memory on both ends.
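For illustration (this example is mine, not from the podcast), a minimal conduit sketch of such a constant-memory transfer between two local files could look like this; the file names are placeholders:

import Conduit

-- Stream the input file to the output file chunk by chunk, in constant memory.
main :: IO ()
main = runConduitRes $ sourceFile "input.bin" .| sinkFile "output.bin"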
An older alternative to those libraries is lazy IO, but it is filled with issues and makes applications error-prone. Those issues are discussed in the podcast.
Concerning which one of those libraries to use, it's more of a matter of taste. I prefer "pipes". The detailed differences are discussed in the podcast too.
Related
I wrote a simple program in which I read a big XML file, do some processing on the contents of the file, and then save the processed data in a new file.
The original main function follows something like this:
main = do
  content <- B.readFile "/home/sibi/github/data/chennai.osm"
  let tags = removeUnwanted $ parseTags content
      hospitals = toHospital $ extractHospitalNode tags
  BL.writeFile "osmHospitals.json" (encode hospitals)
But this code eats up the full memory and takes a huge time to finish. So I decided to use the conduit library to make the program run in constant memory. But after reading the conduit tutorial, I still haven't got the idea of how to make the above program use the features of the conduit library.
I figured out that I can use conduit's sourceFile, which can stream the content of the file. But then how do I apply the function parseTags (which is a function from the TagSoup library) and other simple functions to the streamed content?
Edit: The entire code is here
There's a huge disconnect between the methodology of parseTags and the methodology of conduit and pipes: parseTags assumes it can access the next chunk of data purely, while pipes/conduit let you handle situations where that's impossible, such as streaming from a file. In order to mix parsing into pipes/conduit you must have a way to mix consuming a parse into steps which pull new chunks of data.
(I'll use pipes in the sequel because I'm more familiar with them, but the idea is transferable.)
We can see this disconnect in the types, though I'll begin with a slightly restricted version.
parseTags :: Lazy.ByteString -> [Tag Lazy.ByteString]
We can think of Lazy.ByteString as a streaming apparatus all by itself; it is, after all, essentially just
type LazyByteString = [Strict.ByteString]
such that if we were generating the Lazy.ByteString ourselves then we could rely on the laziness of lists to ensure that we don't generate more than what parseTags needed in order to proceed (I'll assume, without looking, that parseTags is written so that it could incrementally parse a streaming structure like that).
sillyGen :: LazyByteString
sillyGen = gen 10 where
  gen 0 = []
  gen n = "<tag> </tag>" : gen (n-1)
Now the problem here is that the streaming behavior of a list depends crucially upon being able to generate the tail of the list purely. In the discussion so far there hasn't been any mention of a monad at all. Unfortunately, that cannot be true with a string being streamed from a file---we need to somehow integrate an IO action between each streamed chunk where we consider whether or not we've reached EOF and close the file as necessary.
This is exactly the realm of pipes and conduit, so let's look at what they do to solve that issue.
-- from pipes-bytestring
fromHandle :: Handle -> Producer' Strict.ByteString IO ()
We can think of fromHandle as being the "monadically-interwoven" equivalent to
Lazy.hGetContents :: Handle -> IO Lazy.ByteString
The types suggest a crucial difference between these two operations--hGetContents can be executed in exactly one IO action, while when we pass a Handle to pipes-bytestring's fromHandle it returns a type which is parameterized over IO but cannot simply be freed from it. This is exactly indicative of hGetContents using lazy IO (which can be unpredictable due to the use of unsafeInterleaveIO) while fromHandle uses deterministic streaming.
We can write a type similar to Producer Strict.ByteString IO () as
data IOStreamBS = IOSBS { stepStream :: IO (Strict.ByteString, Either IOStreamBS ()) }
In other words we can think of Producer Strict.ByteString IO () as not much more than an IO action which produces exactly the next chunk of the file and (possibly) a new action to get the next chunk. This is how pipes and conduit provide deterministic streaming.
But it also means that you cannot escape from the IO in one fell swoop—you have to carry it around.
We might thus want to adjust parseTags, which is capable of some generalization over its input, to just accept Producer Strict.ByteString IO () as a StringLike type
parseTags :: StringLike str => str -> [Tag str]
Let's assume for argument that we've instantiated StringLike (Producer Strict.ByteString IO ()). That would mean that applying parseTags to our producer would provide us with a list of Tag (Producer Strict.ByteString IO ()).
type DetStream = Producer Strict.ByteString IO ()
parseTags :: DetStream -> [Tag DetStream]
For this to happen we would have had to peek into our Producer and cut it up into chunks without executing anything in the IO monad. By this point it should be clear that such a function is impossible---we couldn't even get the first chunk from the file without doing something in IO.
To remedy this situation, systems like pipes-parse and pipes-group have arisen which replace the function signature with something more like
parseTagsGrouped :: Producer Strict.ByteString IO ()
-> FreeT (Producer (Tag Strict.ByteString) IO) IO ()
which is scary looking but serves an identical purpose to parseTags except that it generalizes the list to a structure which allows us to execute arbitrary IO actions between each element. This kind of transformation, as the type shows, can be done purely and thus allows us to assemble our streaming machinery using pure combinations and only incur an IO step when we execute it at the end (using runEffect).
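To give a rough idea of how such a grouped stream might be consumed, here is a hedged sketch using pipes-group's concats to flatten it back into a single Producer of tags; parseTagsGrouped is the hypothetical function named above, so this is only illustrative:

import qualified Data.ByteString as Strict
import Pipes
import Pipes.Group (concats)
import qualified Pipes.Prelude as P

-- Flatten the FreeT of tag-producers into one Producer of tags and print each tag.
printAllTags :: Producer Strict.ByteString IO () -> IO ()
printAllTags chunks = runEffect $ concats (parseTagsGrouped chunks) >-> P.print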
So, all said and done, it's probably not going to be possible to use pipes or conduit to stream to parseTags---it simply assumes that certain transformations can be done purely, pushing all the IO to one point in time, while pipes/conduit are basically mechanisms for spreading IO throughout a computation without too much mental overhead.
If you're stuck using parseTags, however, you can get by using lazy IO as long as you're careful. Try a few variations with hGetContents from Data.ByteString.Lazy. The primary problem will be that the file may close prior to the unsafeInterleaveIO'd operations actually getting around to reading it. You'll thus need to manage strictness very carefully.
Essentially that's the big difference between pipes/conduit and lazy IO. When using lazy IO, all of the "read a chunk" operations are made invisible and implicitly controlled by Haskell laziness. This is dynamic, implicit, and tough to observe or predict. In pipes/conduit all of this motion is made extraordinarily explicit and static, but it's up to you to manage the complexity.
What if you try System.IO, read the file line by line, and process it (or read parts of the XML file)?
I have the following snippet of code, which I pass to withFile:
text <- hGetContents hand
let code = parseCode text
return code
Here hand is a valid file handle, opened with ReadMode and parseCode is my own function that reads the input and returns a Maybe. As it is, the function fails and returns Nothing. If, instead I write:
text <- hGetContents hand
putStrLn text
let code = parseCode text
return code
I get a Just, as I should.
If I do openFile and hClose myself, I have the same problem. Why is this happening? How can I cleanly solve it?
Thanks
hGetContents isn't too lazy, it just needs to be composed with other things appropriately to get the desired effect. Maybe the situation would be clearer if it were renamed exposeContentsToEvaluationAsNeededForTheRestOfTheAction or just listen.
withFile opens the file, does something (or nothing, as you please -- exactly what you require of it in any case), and closes the file.
It will hardly suffice to bring out all the mysteries of 'lazy IO', but consider now this difference in bracketing
good file operation = withFile file ReadMode (hGetContents >=> operation >=> print)
bad file operation = (withFile file ReadMode hGetContents) >>= operation >>= print
-- *Main> good "lazyio.hs" (return . length)
-- 503
-- *Main> bad "lazyio.hs" (return . length)
-- 0
Crudely put, bad opens and closes the file before it does anything; good does everything in between opening and closing the file. Your first action was akin to bad. withFile should govern all of the action you want done that depends on the handle.
You don't need a strictness enforcer if you are working with String, small files, etc., just an idea how the composition works. Again, in bad all I 'do' before closing the file is exposeContentsToEvaluationAsNeededForTheRestOfTheAction. In good I compose exposeContentsToEvaluationAsNeededForTheRestOfTheAction with the rest of the action I have in mind, then close the file.
The familiar length + seq trick mentioned by Patrick, or length + evaluate, is worth knowing; your second action with putStrLn text was a variant. But reorganization is better, unless lazy IO is wrong for your case.
$ time ./bad
bad: Prelude.last: empty list
-- no, lots of Chars there
real 0m0.087s
$ time ./good
'\n' -- right
()
real 0m15.977s
$ time ./seqing
Killed -- hopeless, attempting to represent the file contents
real 1m54.065s -- in memory as a linked list, before finding out the last char
It goes without saying that ByteString and Text are worth knowing about, but reorganization with evaluation in mind is better, since even with them the Lazy variants are often what you need, and they then involve grasping the same distinctions between forms of composition. If you are dealing with one of the (immense) class of cases where this sort of IO is inappropriate, take a look at enumerator, conduit and co., all wonderful.
hGetContents uses lazy IO; it only reads from the file as you force more of the string, and it only closes the file handle when you evaluate the entire string it returns. The problem is that you're enclosing it in withFile; instead, just use openFile and hGetContents directly (or, more simply, readFile). The file will still get closed once you fully evaluate the string. Something like this should do the trick, to ensure that the file is fully read and closed immediately by forcing the entire string beforehand:
import Control.Exception (evaluate)
readCode :: FilePath -> IO Code
readCode fileName = do
  text <- readFile fileName
  evaluate (length text)
  return (parseCode text)
Unintuitive situations like this are one of the reasons people tend to avoid lazy IO these days, but unfortunately you can't change the definition of hGetContents. A strict IO version of hGetContents is available in the strict package, but it's probably not worth depending on the package just for that one function.
If you want to avoid the overhead that comes from traversing the string twice here, then you should probably look into using a more efficient type than String, anyway; the Text type has strict IO equivalents for much of the String-based IO functionality, as does ByteString (if you're dealing with binary data, rather than Unicode text).
You can force the contents of text to be evaluated using
length text `seq` return code
as the last line.
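In context, the body passed to withFile would then read (a sketch based on the snippet in the question):

text <- hGetContents hand
let code = parseCode text
length text `seq` return code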
I have a question about the best way to design a program I'm working on in Haskell. I'm writing a physics simulator, which is something I've done a bunch in standard imperative languages, and usually the main method looks something like:
while True:
    simulationState = stepForward(simulationState)
    render(simulationState)
And I'm wondering how to do something similar in Haskell. I have a function step :: SimState -> SimState and a function display :: SimState -> IO () that uses HOpenGL to draw a simulation state, but I'm at a loss as to how to do this in a "loop" of sorts, as all of the solutions I can come up with involve some sort of mutability. I'm a bit of a noob when it comes to Haskell, so it's entirely possible that I'm missing a very obvious design decision. Also, if there's a better way to architect my program as a whole, I'd be glad to hear it.
Thanks in advance!
In my opinion, the right way to think about this problem is not as a loop, but as a list or other such infinite streaming structure. I gave a similar answer to a similar question; the basic idea is, as C. A. McCann wrote, to use iterate stepForward initialState, where iterate :: (a -> a) -> a -> [a] “returns an infinite list of repeated applications of [stepForward] to [initialState]”.
The problem with this approach is that you have trouble dealing with a monadic step, and in particular a monadic rendering function. One approach would just be to take the desired chunk of the list in advance (possibly with a function like takeWhile, possibly with manual recursion) and then mapM_ render on that. A better approach would be to use a different, intrinsically monadic, streaming structure. The four that I can think of are:
The iteratee package, which was originally designed for streaming IO. I think here, your steps would be a source (enumerator) and your rendering would be a sink (iteratee); you could then use a pipe (an enumeratee) to apply functions and/or do filtering in the middle.
The enumerator package, based on the same ideas; one might be cleaner than the other.
The newer pipes package, which bills itself as “iteratees done right”. It's newer, but the semantics are, at least to me, significantly clearer, as are the names (Producer, Consumer, and Pipe). A short sketch using pipes follows below.
The List package, in particular its ListT monad transformer. This monad transformer is designed to allow you to create lists of monadic values with more useful structure than [m a]; for instance, working with infinite monadic lists becomes more manageable. The package also generalizes many functions on lists into a new type class. It provides an iterateM function twice; the first time in incredible generality, and the second time specialized to ListT. You can then use functions such as takeWhileM to do your filtering.
The big advantage to reifying your program’s iteration in some data structure, rather than simply using recursion, is that your program can then do useful things with control flow. Nothing too grandiose, of course, but for instance, it separates the “how to terminate” decision from the “how to generate” process. Now, the user (even if it's just you) can separately decide when to stop: after n steps? After the state satisfies a certain predicate? There's no reason to bog down your generating code with these decisions, as it's logically a separate concern.
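To make the pipes option concrete, a minimal sketch might look like this; SimState, step, display, and initialState are the names from the question, and the 1000-step cutoff is an arbitrary termination criterion:

import Pipes
import qualified Pipes.Prelude as P

-- Produce the infinite stream of simulation states...
states :: Monad m => SimState -> Producer SimState m ()
states = go
  where
    go s = yield s >> go (step s)

-- ...and decide separately how much of it to consume and how to render it.
main :: IO ()
main = runEffect $ states initialState >-> P.take 1000 >-> P.mapM_ display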
Well, if drawing successive states is all you want to do, that's pretty simple. First, take your step function and the initial state and use the iterate function. iterate step initialState is then an (infinite) list of each simulation state. You can then map display over that to get IO actions to draw each state, so together you'd have something like this:
allStates :: [SimState]
allStates = iterate step initialState
displayedStates :: [IO ()]
displayedStates = fmap display allStates
The simplest way to run it would be to then use the intersperse function to put a "delay" action between each display action, then use the sequence_ function to run the whole thing:
main :: IO ()
main = sequence_ $ intersperse (delay 20) displayedStates
Of course that means you have to forcibly terminate the application and precludes any sort of interactivity, so it's not really a good way to do it in general.
A more sensible approach would be to interleave things like "seeing if the application should exit" at each step. You can do that with explicit recursion:
runLoop :: SimState -> IO ()
runLoop st = do
  display st
  isDone <- checkInput
  if isDone
    then return ()
    else delay 20 >> runLoop (step st)
My preferred approach is to write non-recursive steps instead and then use a more abstract loop combinator. Unfortunately there's not really good support for doing it that way in the standard libraries, but it would look something like this:
runStep :: SimState -> IO SimState
runStep st = do
  display st
  delay 20
  return (step st)
runLoop :: SimState -> IO ()
runLoop initialState = iterUntilM_ checkInput runStep initialState
Implementing the iterUntilM_ function is left as an exercise for the reader, heh.
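For what it's worth, one possible sketch of that combinator, which runs the body and then consults the check (mirroring runLoop above), is:

iterUntilM_ :: Monad m => m Bool -> (a -> m a) -> a -> m ()
iterUntilM_ done body = go
  where
    go st = do
      st' <- body st          -- run one step (e.g. runStep: display, delay, step)
      finished <- done        -- then ask whether to stop (e.g. checkInput)
      if finished
        then return ()
        else go st'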
Your approach is ok, you just need to remember that loops are expressed as recursion in Haskell:
simulation state = do
  let newState = stepForward state
  render newState
  simulation newState
(But you definitely need a criterion for how to end the loop.)
I'm reading 512^2 whitespace delimited doubles written in a text file to my Erlang program by piping them to stdin.
In Erlang this takes 2m25s, in an equivalent Haskell program it takes 3s, so I must be going against the Erlang way of doing it in some way.
Am I using Erlang's IO primitives in a stupid way, or is there something else wrong with my program?
Note that I don't care about the order of the values in the resulting list, so no reverse operation.
Erlang:
-module(iotest).
-import(io).
-export([main/0]).
main() ->
    Values = read(),
    io:write(Values).

read() -> read([]).

read(Acc) ->
    case io:fread("", "~f") of
        {ok, Value} -> read([Value | Acc]);
        eof -> Acc
    end.
Haskell:
module IOTest (
main
) where
main :: IO ()
main = do
  text <- getContents
  let values = map read (words text) :: [Double]
  putStrLn $ show values
  return ()
Thanks very much for any help.
No, you are not using Erlang IO in a stupid way. The problem is with Erlang IO, which is not well known for being fast. Erlang is widely used for writing servers, so socket-oriented IO is excellently tuned. Block-oriented file IO is not so bad, but using the io module for working with stdin doesn't work well. Erlang is not widely used for this kind of work. If you need this kind of operation you should write your own specialized input routine. You have two options there:
use io for reading from the file in raw and binary mode, then split the input using the binary module, and then use list_to_float/1 for conversion.
use a specialized port-oriented stdin reading routine (as you can see, for example, in http://shootout.alioth.debian.org/u64q/program.php?test=regexdna&lang=hipe&id=7; note the read/0 function and the -noshell -noinput parameters for VM invocation) and then continue as in the first option.
In my opinion (and from my previous experience), the biggest impact in your case comes from using a scan-like input routine for float decoding, followed by slow (repeated) io invocation, but it would need some nontrivial profiling to prove it.
I'm trying a little experiment in Haskell, wondering if it is possible to exploit laziness to process IO. I'd like to write a function that takes a String (a list of Chars) and produces a String, lazily. I would then like to be able to lazily feed it characters from IO, so each character would be processed as soon as it was available, and the output would be produced as the necessary characters became available. However, I'm not quite sure if/how I can produce a lazy list of characters from input inside the IO monad.
Regular String IO in Haskell is lazy. So your example should just work out of the box.
Here's an example, using the 'interact' function, which applies a function to a lazy stream of characters:
interact :: (String -> String) -> IO ()
Let's filter out the letter 'e' from the input stream, lazily (i.e. run in constant space):
main = interact $ filter (/= 'e')
You could also use getContents and putStr if you like. They're all lazy.
Running it to filter the letter 'e' from the dictionary:
$ ghc -O2 --make A.hs
$ ./A +RTS -s < /usr/share/dict/words
...
2 MB total memory in use (0 MB lost due to fragmentation)
...
so we see that it ran in a constant 2M footprint.
The simplest method of doing lazy IO involves functions such as interact, readFile, hGetContents, and such, as dons says; there's a more extended discussion of these in the book Real World Haskell that you might find useful. If memory serves me, all such functions are eventually implemented using the unsafeInterleaveIO that ephemient mentions, so you can also build your own functions that way if you want.
On the other hand, it might be wise to note that unsafeInterleaveIO is exactly what it says on the tin: unsafe IO. Using it--or functions based on it--breaks purity and referential transparency. This allows apparently pure functions (that is, ones that do not return an IO action) to affect the outside world when evaluated, produce different results from the same arguments, and all those other unpleasant things. In practice, most sensible ways of using unsafeInterleaveIO won't cause problems, and simple mistakes will usually result in obvious and easily diagnosed bugs, but you've lost some nice guarantees.
There are alternatives, of course; you can find assorted libraries on Hackage that provide restricted, safer lazy IO or conceptually different approaches. However, given that problems arise only rarely in practical use, I think most people are inclined to stick with the built-in, technically unsafe functions.
unsafeInterleaveIO :: IO a -> IO a
unsafeInterleaveIO allows an IO computation to be deferred lazily. When passed a value of type IO a, the IO will only be performed when the value of type a is demanded. This is used to implement lazy file reading; see System.IO.hGetContents.
For example, main = getContents >>= return . map Data.Char.toUpper >>= putStr is lazy; as you feed characters to stdin, you will get characters on stdout.
(This is the same as writing main = interact $ map Data.Char.toUpper, as in dons's answer.)
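If you ever want to build such a lazy stream by hand, a minimal sketch using unsafeInterleaveIO could look like the following; getContents is essentially a more robust version of the same idea:

import System.IO (isEOF)
import System.IO.Unsafe (unsafeInterleaveIO)

-- Each cons cell of the result is read from stdin only when it is demanded.
lazyStdin :: IO String
lazyStdin = unsafeInterleaveIO $ do
  eof <- isEOF
  if eof
    then return []
    else do
      c  <- getChar
      cs <- lazyStdin   -- deferred again, so only one character is read per demand
      return (c : cs)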