I'm trying to process a very large unicode text file (6GB+). What I want is to count the frequency of each unique word. I use a strict Data.Map to keep track of the counts of each word as I traverse the file.
The process takes too much time and too much memory (20GB+). I suspect the Map is huge but I'm not sure it should reach 5x the size of the file!
The code is shown below. Please note that I tried the following:
Using Data.HashMap.Strict instead of Data.Map.Strict. Data.Map seems to perform better: its memory consumption grows more slowly.
Reading the file as a lazy ByteString instead of lazy Text, then decoding it to Text, doing the processing, and encoding it back to ByteString for IO.
import Data.Text.Lazy (Text(..), cons, pack, append)
import qualified Data.Text.Lazy as T
import qualified Data.Text.Lazy.IO as TI
import Data.Map.Strict hiding (foldr, map, foldl')
import System.Environment
import System.IO
import Data.Word
dictionate :: [Text] -> Map Text Word16
dictionate = fromListWith (+) . (`zip` [1,1..])
main = do
    [file,out] <- getArgs
    h <- openFile file ReadMode
    hO <- openFile out WriteMode
    mapM_ (flip hSetEncoding utf8) [h,hO]
    txt <- TI.hGetContents h
    TI.hPutStr hO . T.unlines .
        map (uncurry ((. cons '\t' . pack . show) . append)) .
        toList . dictionate . T.words $ txt
    hFlush hO
    mapM_ hClose [h,hO]
    print "success"
What's wrong with my approach? What's the best way to accomplish what I'm trying to do in terms of time and memory performance?
This memory usage is expected. Data.Map.Map consumes about 6N words of memory plus the size of the keys and values (data taken from this excellent post by Johan Tibell). A lazy Text value takes up 7 words plus 2 bytes per character (rounded up to a multiple of the machine word size), and a Word16 takes up two words (header + payload). We will assume a 64-bit machine, so the word size is 8 bytes, and that the average word in the input is 8 characters long.
Taking all of this into account, each map entry costs 6 + 7 + 2 + 2 = 17 words, i.e. about 136 bytes, so a map with N distinct words needs roughly 17*N words of memory.
In the worst case, all words are different and there are about (6 * 1024^3) / 8 ≈ 800 * 10^6 of them. Plugging that into the formula above gives a worst-case map size of approx. 102 GiB. Your observed 20 GB+ is well below that bound, so not every word is distinct; solving the equation in the reverse direction with the observed size tells us that your file contains on the order of 2 * 10^8 distinct words.
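For anyone who wants to redo the arithmetic, here is the same estimate as a tiny Haskell snippet; all constants are just the assumptions stated above (8-byte machine words, ~8-character words, Word16 values), not measurements.

bytesPerEntry :: Integer
bytesPerEntry = (6 + 7 + 2 + 2) * 8            -- 17 words = 136 bytes per map entry

main :: IO ()
main = do
  let worstCaseEntries = (6 * 1024 ^ 3) `div` 8         -- ~8e8 words of ~8 bytes in a 6 GiB file
  print (worstCaseEntries * bytesPerEntry)              -- ~1.1e11 bytes, roughly 102 GiB
  print ((20 * 1024 ^ 3) `div` bytesPerEntry)           -- distinct words implied by a ~20 GiB map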
As for alternative approaches to this problem, consider using a trie (as suggested by J.Abrahamson in the comments) or an approximate method, such as count-min sketch.
In the world of traditional data processing, this problem would have been done by sorting (externally on disk or magtape if needed), then scanning the sorted file to count the grouped-together runs of words. Of course you could do some partial reductions during the early phases of sorting, to save some space and time.
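If you do keep the counting in memory, the same computation can also be phrased as a single strict left fold over the lazily read words, which avoids materialising the intermediate association list that fromListWith consumes (it will not, of course, shrink the map itself). A minimal sketch, using Int counts instead of Word16 to avoid overflow:

import Data.List (foldl')
import qualified Data.Map.Strict as M
import qualified Data.Text.Lazy as T
import qualified Data.Text.Lazy.IO as TI

-- Stream the file lazily and bump a strict counter per word.
countWords :: FilePath -> IO (M.Map T.Text Int)
countWords path = do
  txt <- TI.readFile path
  return $! foldl' (\m w -> M.insertWith (+) w 1 m) M.empty (T.words txt)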
Related
Hello, I am having problems reading back a list of tuples of lists after saving and appending it to a file.
Saving something into a file works without problems.
I am saving into a file with
import qualified Data.ByteString as BS
import qualified Data.Serialize as S (decode, encode)
import Data.Either
toFile path = do
  let a = take 1000 [100..] :: [Float]
  let b = take 100 [1..] :: [Float]
  BS.appendFile path $ S.encode (a,b)
and reading with
fromFile path = do
  bstr <- BS.readFile path
  let d = S.decode bstr :: Either String ([Float],[Float])
  return (Right d)
but reading from that file with fromFile only gives me one element, although I append to that file multiple times.
Since I'm appending to the file, it should have multiple elements inside it, so I'm missing something like a map in my fromFile function, but I couldn't work out how.
I appreciate any help or any other solutions, so using Data.Serialize and ByteString is not a must. Another possibility I thought of is JSON files with Data.Aeson, if I can't get it to work with Serialize.
Edit:
I realized that I made a mistake in the decoding type in fromFile
let d = S.decode bstr :: Either String ([Float],[Float])
it should be like this
let d = S.decode bstr :: Either String [([Float],[Float])]
The Problem In Brief
The default format used by serialize (or binary) encoding isn't trivially append-able.
The Problem (Longer)
You say you appended:
S.encode (a,b)
to the same file "multiple times". So the format of the file is now:
[ 64-bit length field | the encoded floats | 64-bit length field | the encoded floats ]
Repeated however many times you appended to the file. That is, each append adds new length fields and lists of floats while leaving the old values in place.
After that you returned to read the file and decode some floats using, morally, S.decode <$> BS.readFile path. This decodes the first two lists of floats by reading the first length field (from the first time you wrote to the file), then that many floats, then the second length field followed by its floats. After reading the stated number of floats, the decoder stops.
It should now be clear that just because you appended more data does not make your encoding or decoding script look for any additional data. The default format used by serialize (or binary) encoding isn't trivially append-able.
Solutions
You mentioned switching to Aeson, but using JSON to encode instead of binary won't help you. Decoding two appended JSON strings like { "first": [1], "second": [2]}{ "first": [3], "second": [4]} is logically the same as your current problem. You have some unknown number of appended chunks of lists, so just write a decoder that keeps trying:
import Data.Serialize as S
import Data.Serialize.Get as S
import Data.ByteString as BS
fromFile path = do
  bstr <- BS.readFile path
  let d = S.runGet getMultiChunks bstr :: Either String ([Float],[Float])
  return d

getMultiChunks :: Get ([Float], [Float])
getMultiChunks = go ([], [])
  where
    go (l, r) = do
      b <- isEmpty
      if b
        then pure (l, r)    -- no bytes left: return everything accumulated so far
        else do
          (lNext, rNext) <- S.get
          go (l ++ lNext, r ++ rNext)    -- inefficient
So we've written our own getter (untested) that looks to see if bytes remain and, if so, decodes another pair of lists of floats. Each time it decodes a new chunk it appends it to the accumulated lists (which is inefficient; use something like a dlist if you want it to be respectable).
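A quick (untested) way to exercise the round trip, reusing the toFile from the question and the fromFile above; the file name "floats.bin" is made up. Appending twice and then decoding should give back one pair containing the concatenation of both appended chunks:

-- Hypothetical driver for the code above.
main :: IO ()
main = do
  toFile "floats.bin"
  toFile "floats.bin"
  result <- fromFile "floats.bin"      -- Either String ([Float], [Float])
  case result of
    Left err       -> putStrLn ("decode failed: " ++ err)
    Right (as, bs) -> print (length as, length bs)  -- expect (2000, 200)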
I was wondering if there's an easy way to get lines one at a time out of a file without eventually loading the whole file into memory. I'd like to do a fold over the lines with an attoparsec parser. I tried using Data.Text.Lazy.IO with hGetLine and that blows through my memory. I later read that it eventually loads the whole file.
I also tried using pipes-text with folds and view lines:
s <- Pipes.sum $
    folds (\i _ -> (i+1)) 0 id (view Text.lines (Text.fromHandle handle))
print s
to just count the number of lines, and it seems to be doing something wonky, failing with "hGetChunk: invalid argument (invalid byte sequence)", and it takes 11 minutes whereas wc -l takes 1 minute. I heard that pipes-text might have some issues with gigantic lines? (Each line is about 1GB.)
I'm really open to any suggestions; I can't find much by searching except for newbie readLine how-tos.
Thanks!
The following code uses Conduit, and will:
UTF8-decode standard input
Run the lineC combinator as long as there is more data available
For each line, simply yield the value 1 and discard the line content, without ever reading the entire line into memory at once
Sum up the 1s yielded and print it
You can replace the yield 1 code with something which will do processing on the individual lines.
#!/usr/bin/env stack
-- stack --resolver lts-8.4 --install-ghc runghc --package conduit-combinators
import Conduit
main :: IO ()
main = (runConduit
     $ stdinC
    .| decodeUtf8C
    .| peekForeverE (lineC (yield (1 :: Int)))
    .| sumC) >>= print
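For example, replacing the yield 1 with a per-line computation could total the characters on each line instead of counting lines. A sketch, untested, assuming conduit's lengthCE combinator counts the elements of a chunked stream:

import Conduit

-- Sum the characters of every line (newlines excluded), still without
-- ever holding a complete line in memory.
main :: IO ()
main = (runConduit
     $ stdinC
    .| decodeUtf8C
    .| peekForeverE (lineC (do n <- lengthCE; yield (n :: Int)))
    .| sumC) >>= print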
This is probably easiest as a fold over the decoded text stream
{-# LANGUAGE BangPatterns #-}
import Control.Monad (void)
import Pipes
import qualified Pipes.Prelude as P
import qualified Pipes.ByteString as PB
import qualified Pipes.Text.Encoding as PT
import qualified Control.Foldl as L
import qualified Control.Foldl.Text as LT

main = do
  n <- L.purely P.fold (LT.count '\n') $ void $ PT.decodeUtf8 PB.stdin
  print n
It takes about 14% longer than wc -l for the file I produced, which was just long lines of commas and digits. IO should properly be done with Pipes.ByteString, as the documentation says; the rest is conveniences of various sorts.
You can map an attoparsec parser over each line, distinguished by view lines, but keep in mind that an attoparsec parser can accumulate the whole text as it pleases, and this might not be a great idea over a 1 gigabyte chunk of text. If there is a repeated figure on each line (e.g. space-separated numbers) you can use Pipes.Attoparsec.parsed to stream them.
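If the repeated item really is something as simple as whitespace-separated numbers, a rough (untested) sketch of that streaming-parse idea, assuming pipes-attoparsec's parsed and the same decoding setup as above, might look like:

import Control.Monad (void)
import Pipes
import qualified Pipes.Prelude as P
import qualified Pipes.ByteString as PB
import qualified Pipes.Text.Encoding as PT
import qualified Pipes.Attoparsec as PA
import qualified Data.Attoparsec.Text as A

-- One parser invocation per number; whitespace (including newlines) is skipped.
number :: A.Parser Int
number = A.skipSpace *> A.decimal

main :: IO ()
main = do
  -- parsed re-runs the parser until it fails (e.g. at end of input),
  -- streaming each Int downstream instead of accumulating a whole line.
  total <- P.sum (void (PA.parsed number (void (PT.decodeUtf8 PB.stdin))))
  print total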
This is an excerpt of a file.csv file with some tabular data
John,23,Paris
Alban,28,London
Klaus,27,Berlin
Hans,29,Stockholm
Julian,25,Paris
Jonathan,26,Lyon
Albert,27,London
The column headers for this file would be
firstName, age, city
This file is loaded in ghci like this
𝛌> :m + Data.List Data.Function Data.List.Split
𝛌> contents <- readFile "file.csv"
𝛌> let t = map (splitOn ",") $ lines contents
𝛌> mapM print $ take 3 t
["John","23","Paris"]
["Alban","28","London"]
["Klaus","27","Berlin"]
[(),(),()]
Now, if I want to add a birthYear column to those 3 columns, I can do
𝛌> let getYear str = show $ 2016 - read str
𝛌> let withYear = map (\(x:xs) -> x : xs ++ [getYear (head xs)]) t
𝛌> mapM print $ take 3 withYear
["John","23","Paris","France","1993"]
["Alban","28","London","UK","1988"]
["Klaus","27","Berlin","Germany","1989"]
[(),(),()]
This works well but what bothers me is that the getYear function has type String -> String and as such, type checking is pretty much useless here.
I could easily convert t into a list of tuples like ("John", 23, "Paris") but what about if I have not 3, but 300 features (which is not that uncommon in machine learning problems)?
What would be the best way to deal with different column types? Using tuples? Using maps?
In case of a big number of columns, is there a way to make Haskell infer the column's types? For instance, it would detect that column 2 in the above example is Int, and the others are strings?
Concerning column headers, would there be a way that one could simply access the columns by label instead of by index, so that getYear could be something like 2016 - column['age'] (Python example)?
I'm used to Python's Pandas DataFrames, which perform all this stuff automagically, but it looks like Haskell could do a lot of it natively; I'm just not sure how, as of now.
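One low-tech illustration of the record-based approach (before reaching for a data-frame-style library such as cassava or Frames): parse each row into a record once, and every later step is type-checked. A minimal sketch; the Person type, parseRow and birthYear below are made-up names for the three columns above:

import Data.List.Split (splitOn)

-- Hypothetical record for the three columns above.
data Person = Person { firstName :: String, age :: Int, city :: String }
  deriving Show

parseRow :: String -> Person
parseRow line = case splitOn "," line of
  [n, a, c] -> Person n (read a) c
  _         -> error ("malformed row: " ++ line)

-- Now an Int computation instead of String -> String.
birthYear :: Person -> Int
birthYear p = 2016 - age p

main :: IO ()
main = do
  contents <- readFile "file.csv"
  let people = map parseRow (lines contents)
  mapM_ (\p -> print (firstName p, age p, city p, birthYear p)) (take 3 people)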
Using BioHaskell, how can I read a FASTA file containing aminoacid sequences?
I want to be able to:
Get a list of String sequences
Get a Map String String (from Data.Map) from the FASTA comment (assumed to be unique) to the sequence String
Use the sequences in algorithms implemented in BioHaskell.
Note: This question intentionally does not show research effort as it was immediately answered in a Q&A-style manner.
Extracting raw sequence strings
We will assume from now on that the file aa.fa contains some aminoacid FASTA sequences. Let's start with a simple example that extracts a list of sequences.
import Bio.Sequence.Fasta (readFasta)
import Bio.Sequence.SeqData (seqdata)
import qualified Data.ByteString.Lazy.Char8 as LB
main = do
  sequences <- readFasta "aa.fa"
  let listOfSequences = map (LB.unpack . seqdata) sequences :: [String]
  -- Just for show, we will print one sequence per line here
  -- This will basically execute putStrLn for each sequence
  mapM_ putStrLn listOfSequences
readFasta returns IO [Sequence Unknown]. Basically that means there is no information about whether the sequences contain aminoacids or nucleotides.
Note that we use LB.unpack instead of show here, because show adds double quotes (") at the beginning and the end of the resulting String. Using LB.unpack works because, in the current BioHaskell version 0.5.3, SeqData is just defined as a lazy ByteString.
We can fix the Unknown sequence type by using castToAmino or castToNuc:
Converting to AA/Nucleotide sequences
let aaSequences = map castToAmino sequences :: [Sequence Amino]
Note that those functions currently (BioHaskell version 0.5.3) do not perform any validity checks. You can use the resulting [Sequence Amino] or [Sequence Nuc] in the BioHaskell algorithms.
Lookup sequence by FASTA header
We will now assume that our aa.fa contains a sequence
>abc123
MGLIFARATNA...
Now, we will build a Map String String (we will use Data.Map.Strict in this example) from the FASTA file. We can use this map to lookup the sequence.
The lookup will yield a Maybe String. The intended behaviour in this example is to print the sequence if it was found, or not to print anything if nothing was found in the Map.
As Maybe is Foldable, we can use Data.Foldable.mapM_ for this task.
import Bio.Sequence.Fasta (readFasta)
import Bio.Sequence.SeqData (Sequence, seqdata, seqheader)
import qualified Data.ByteString.Lazy.Char8 as LB
import Data.Foldable (mapM_)
import qualified Data.Map.Strict as Map
-- | Convert a Sequence to a String tuple (sequence label, sequence)
sequenceToMapTuple :: Sequence a -> (String, String)
sequenceToMapTuple s = (LB.unpack $ seqheader s, LB.unpack $ seqdata s)
main = do
  sequences <- readFasta "aa.fa"
  -- Build the sequence map (by header)
  let sequenceMap = Map.fromList $ map sequenceToMapTuple sequences
  -- Lookup the sequence for the "abc123" header
  mapM_ print $ Map.lookup "abc123" sequenceMap
Edit: Thanks to @GabrielGonzalez's suggestion, the final example now uses Data.Foldable.mapM_ instead of Data.Maybe.fromJust.
Question
I want a program that will write a sequence like,
1
...
10000000
to a file. What's the simplest code one can write, and get decent performance? My intuition is that there is some lack-of-buffering problem. My C code runs at 100 MB/s, whereas by reference the Linux command line utility dd runs at 3 GB/s (I originally wrote 9 GB/s; sorry for the imprecision, see comments -- I'm more interested in the big-picture orders of magnitude though).
One would think this would be a solved problem by now ... i.e. any modern compiler would make it easy to write such programs that perform reasonably well ...
C code
#include <stdio.h>

int main(int argc, char **argv) {
    int len = 10000000;
    for (int a = 1; a <= len; a++) {
        printf("%d\n", a);
    }
    return 0;
}
I'm compiling with clang -O3. A performance skeleton which calls putchar('\n') 8 times gets comparable performance.
Haskell code
A naive Haskell implementation runs at 13 MiB/sec, compiling with ghc -O2 -optc-O3 -optc-ffast-math -fllvm -fforce-recomp -funbox-strict-fields. (I haven't recompiled my libraries with -fllvm, perhaps I need to do that.) Code:
import Control.Monad
main = forM [1..10000000 :: Int] $ \j -> putStrLn (show j)
My best stab with Haskell runs even slower, at 17 MiB/sec. The problem is I can't find a good way to convert Vectors into ByteStrings (perhaps there's a solution using iteratees?).
import qualified Data.Vector.Unboxed as V
import Data.Vector.Unboxed (Vector, Unbox, (!))
import System.IO

writeVector :: (Unbox a, Show a) => Vector a -> IO ()
writeVector v = V.mapM_ (System.IO.putStrLn . show) v

main = writeVector (V.generate 10000000 id)
It seems that writing ByteStrings is fast, as demonstrated by this code, which writes an equivalent number of characters:
import Data.ByteString.Char8 as B
main = B.putStrLn (B.replicate 76000000 '\n')
This gets 1.3 GB/s, which isn't as fast as dd, but obviously much better.
Some completely unscientific benchmarking first:
All programmes have been compiled with the default optimisation level (-O3 for gcc, -O2 for GHC) and run with
time ./prog > outfile
As a baseline, the C programme took 1.07s to produce a ~76MB (78888897 bytes) file, roughly 70MB/s throughput.
The "naive" Haskell programme (forM [1 .. 10000000] $ \j -> putStrLn (show j)) took 8.64s, about 8.8MB/s.
The same with forM_ instead of forM took 5.64s, about 13.5MB/s.
The ByteString version from dflemstr's answer took 9.13s, about 8.3MB/s.
The Text version from dflemstr's answer took 5.64s, about 13.5MB/s.
The Vector version from the question took 5.54s, about 13.7MB/s.
main = mapM_ (C.putStrLn . C.pack . show) $ [1 :: Int .. 10000000], where C is Data.ByteString.Char8, took 4.25s, about 17.9MB/s.
putStr . unlines . map show $ [1 :: Int .. 10000000] took 3.06s, about 24.8MB/s.
A manual loop,
main = putStr $ go 1
where
go :: Int -> String
go i
| i > 10000000 = ""
| otherwise = shows i . showChar '\n' $ go (i+1)
took 2.32s, about 32.75MB/s.
main = putStrLn $ replicate 78888896 'a' took 1.15s, about 66MB/s.
main = C.putStrLn $ C.replicate 78888896 'a' where C is Data.ByteString.Char8, took 0.143s, about 530MB/s, roughly the same figures for lazy ByteStrings.
What can we learn from that?
First, don't use forM or mapM unless you really want to collect the results. Performancewise, that sucks.
Then, ByteString output can be very fast (10.), but if the construction of the ByteString to output is slow (3.), you end up with slower code than the naive String output.
What's so terrible about 3.? Well, all the involved Strings are very short. So you get a list of
Chunk "1234567" Empty
and between any two such, a Chunk "\n" Empty is put, then the resulting list is concatenated, which means all these Emptys are tossed away when a ... (Chunk "1234567" (Chunk "\n" (Chunk "1234568" (...)))) is built. That's a lot of wasteful construct-deconstruct-reconstruct going on. Speed comparable to that of the Text and the fixed "naive" String version can be achieved by packing to strict ByteStrings and using fromChunks (and Data.List.intersperse for the newlines). Better performance, slightly better than 6., can be obtained by eliminating the costly singletons. If you glue the newlines to the Strings, using \k -> shows k "\n" instead of show, the concatenation has to deal with half as many slightly longer ByteStrings, which pays off.
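Concretely, the "pack to strict chunks with the newline glued on" variant described above might look like the following sketch (not the exact code that was benchmarked):

import qualified Data.ByteString.Char8 as S
import qualified Data.ByteString.Lazy.Char8 as L

-- Each number becomes one strict chunk that already carries its newline,
-- so the lazy ByteString is simply the list of ready-made chunks.
main :: IO ()
main = L.putStr . L.fromChunks $ map (\k -> S.pack (shows k "\n")) [1 :: Int .. 10000000]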
I'm not familiar enough with the internals of either text or vector to offer more than a semi-educated guess concerning the reasons for the observed performance, so I'll leave them out. Suffice it to say that the performance gain is marginal at best compared to the fixed naive String version.
Now, 6. shows that ByteString output is faster than String output, enough that in this case the additional work of packing is more than compensated. However, don't be fooled by that to believe that is always so. If the Strings to pack are long, the packing can take more time than the String output.
But ten million invocations of putStrLn, be it the String or the ByteString version, take a lot of time. It's faster to grab the stdout Handle just once and construct the output String in non-IO code. unlines already does well, but we still suffer from the construction of the list map show [1 .. 10^7]. Unfortunately, the compiler didn't manage to eliminate that (but it eliminated [1 .. 10^7], that's already pretty good). So let's do it ourselves, leading to 8. That's not too terrible, but still takes more than twice as long as the C programme.
One can make a faster Haskell programme by going low-level and directly filling ByteStrings without going through String via show, but I don't know if the C speed is reachable. Anyway, that low-level code isn't very pretty, so I'll spare you what I have, but sometimes one has to get one's hands dirty if speed matters.
Using lazy byte strings gives you some buffering, because the string will be written instantly and more numbers will only be produced as they are needed. This code shows the basic idea (there might be some optimizations that could be made):
import qualified Data.ByteString.Lazy.Char8 as ByteString
main =
  ByteString.putStrLn .
  ByteString.intercalate (ByteString.singleton '\n') .
  map (ByteString.pack . show) $
  ([1..10000000] :: [Int])
I still use Strings for the numbers here, which leads to horrible slowdowns. If we switch to the text library instead of the bytestring library, we get access to "native" show functions for ints, and can do this:
import Data.Monoid
import Data.List
import Data.Text.Lazy.IO as Text
import Data.Text.Lazy.Builder as Text
import Data.Text.Lazy.Builder.Int as Text
main :: IO ()
main =
  Text.putStrLn .
  Text.toLazyText .
  mconcat .
  intersperse (Text.singleton '\n') .
  map Text.decimal $
  ([1..10000000] :: [Int])
I don't know how you are measuring the "speed" of these programs (with the pv tool?) but I imagine that one of these procedures will be the fastest trivial program you can get.
If you are going for maximum performance, then it helps to take a holistic view; i.e., you want to write a function that maps from [Int] to series of system calls that write chunks of memory to a file.
Lazy bytestrings are good representation for a sequence of chunks of memory. Mapping a lazy bytestring to a series of systems calls that write chunks of memory is what L.hPut is doing (assuming an import qualified Data.ByteString.Lazy as L). Hence, we just need a means to efficiently construct the corresponding lazy bytestring. This is what lazy bytestring builders are good at. With the new bytestring builder (here is the API documentation), the following code does the job.
import qualified Data.ByteString.Lazy as L
import Data.ByteString.Lazy.Builder (toLazyByteString, charUtf8)
import Data.ByteString.Lazy.Builder.ASCII (intDec)
import Data.Foldable (foldMap)
import Data.Monoid (mappend)
import System.IO (openFile, IOMode(..))
main :: IO ()
main = do
    h <- openFile "/dev/null" WriteMode
    L.hPut h $ toLazyByteString $
      foldMap ((charUtf8 '\n' `mappend`) . intDec) [1..10000000]
Note that I output to /dev/null to avoid interference by the disk driver. The effort of moving the data to the OS remains the same. On my machine, the above code runs in 0.45 seconds, which is 12 times faster than the 5.4 seconds of your original code. This implies a throughput of 168 MB/s. We can squeeze out an additional 30% speed (220 MB/s) using bounded encodings.
import qualified Data.ByteString.Lazy.Builder.BasicEncoding as E
L.hPut h $ toLazyByteString $
  E.encodeListWithB
    ((\x -> (x, '\n')) E.>$< E.intDec `E.pairB` E.charUtf8)
    [1..10000000]
Their syntax looks a bit quirky because a BoundedEncoding a specifies the conversion of a Haskell value of type a to a bounded-length sequence of bytes such that the bound can be computed at compile time. This allows functions such as E.encodeListWithB to perform some additional optimizations for implementing the actual filling of the buffer. See the documentation of Data.ByteString.Lazy.Builder.BasicEncoding in the above link to the API documentation (phew, stupid hyperlink limit for new users) for more information.
Here is the source of all my benchmarks.
The conclusion is that we can get very good performance from a declarative solution, provided that we understand the cost model of our implementation and use the right data structures. Whenever you are constructing a packed sequence of values (e.g., a sequence of bytes represented as a bytestring), the right data structure to use is a bytestring Builder.