Abnormally slow Haskell code

I've been trying to practice with the Digits-Recognizer Dojo in Haskell after having done it in F#. I'm getting results, but for some reason my Haskell code is insanely slow, and I cannot seem to find what's wrong.
Here is my code (the .csv files can be found on the Dojo's GitHub):
import Data.Char
import Data.List
import Data.List.Split
import Data.Ord
import System.IO
type Pixels = [Int]
data Digit = Digit { label :: Int, pixels :: Pixels }
distance :: Pixels -> Pixels -> Float
distance d1 d2 = sqrt . sum $ map pointDistance $ zip d1 d2
  where pointDistance (a, b) = fromIntegral $ (a - b) * (a - b)

parseDigit :: String -> Digit
parseDigit s = Digit label pixels
  where (label:pixels) = map read $ splitOn "," s

identify :: Digit -> [Digit] -> (Digit, Float)
identify digit training = minimumBy (comparing snd) distances
  where distances = map fn training
        fn ref = (ref, distance (pixels digit) (pixels ref))

readDigits :: String -> IO [Digit]
readDigits filename = do
  fileContent <- readFile filename
  return $ map parseDigit $ tail $ lines fileContent

main :: IO ()
main = do
  trainingSample <- readDigits "trainingsample.csv"
  validationSample <- readDigits "validationsample.csv"
  let result = [(d, identify d trainingSample) | d <- validationSample]
      fmt (d, (ref, dist)) = putStrLn $ "Found..."
  mapM_ fmt result
What could be the reason for this bad performance?
[UPDATE] Thank you for your many ideas! I have switched from String to Data.Text and from List to Data.Vector as suggested; unfortunately, the result is still far from satisfactory.
My updated code is available here.
To give you a better understanding of my question, here's the output of my Haskell (left) and F# (right) implementations. I'm a total newbie in both languages, so I sincerely believe there has to be a major mistake in my Haskell version for it to be that much slower.

If you're patient, you'll notice that the second result is calculated much faster than the first. That's because your implementation takes some time to read in the csv files.
You may be tempted to stick in a print statement to see when it's done loading, like so:
main = do
  trainingSample <- readDigits "trainingsample.csv"
  validationSample <- readDigits "validationsample.csv"
  putStrLn "done loading data"
But due to lazy IO, this won't do what you think it does. trainingSample and validationSample are not yet fully evaluated. So your print statement will print almost immediately, and the first result will still take forever.
You can force readDigits to fully evaluate their return values, though, which will give you a better idea of how much time is spent there. You could either switch to using non-lazy IO, or just print something derived from the data:
-- also requires: import Control.Monad (forM)
readDigits :: String -> IO [Digit]
readDigits filename = do
  fileContent <- readFile filename
  putStr' $ filename ++ ": "
  rows <- forM (tail $ lines fileContent) $ \line -> do
    let xs = parseDigit line
    putStr' $ case compare (sum $ pixels xs) 0 of
      LT -> "-"
      EQ -> "0"
      GT -> "+"
    return xs
  putStrLn ""
  return rows
  where putStr' s = putStr s >> hFlush stdout
On my machine, this let me see that it took about 27 seconds to fully read the digits from trainingsample.csv.
This is printf-style profiling, which isn't great (much better to use a real profiler, or use criterion to benchmark various parts of your code), but good enough for these purposes.
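For example, a minimal criterion sketch for timing just the parsing step could look like this (an illustration only, assuming the parseDigit from the question; summing the pixels forces each row, since Digit has no NFData instance):

import Criterion.Main

main :: IO ()
main = do
  content <- readFile "trainingsample.csv"
  defaultMain
    [ bench "parse" $ nf (map (sum . pixels . parseDigit) . tail . lines) content ]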
That's clearly a major part of the slowdown, so it's worth trying to switch to strict IO. Using Data.Text.IO.readFile, which is strict, cut it down to ~18 seconds.
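The switch is mechanical; a minimal sketch, assuming parseDigit has been adapted to take Text (as in the update below):

import qualified Data.Text as T
import qualified Data.Text.IO as TIO

readDigits :: String -> IO [Digit]
readDigits filename = do
  fileContent <- TIO.readFile filename  -- strict: the file is fully read here
  return $ map parseDigit $ tail $ T.lines fileContent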
UPDATE
Here's how to speed up your updated code:
Use unboxed vectors for Pixels (small win):
import qualified Data.Text as T
import qualified Data.Text.Read as T.Read
import qualified Data.Vector.Unboxed as U
-- ...

type Pixels = U.Vector Int
-- ...

distance :: Pixels -> Pixels -> Float
distance d1 d2 = sqrt . U.sum $ U.zipWith pointDistance d1 d2
  where pointDistance a b = fromIntegral $ (a - b) * (a - b)

parseDigit :: T.Text -> Digit
parseDigit s = Digit label (U.fromList pixels)
  where (label:pixels) = map toDigit $ T.splitOn (T.pack ",") s
        toDigit t = either (\_ -> 0) fst (T.Read.decimal t)
Force the distance evaluation early by using seq (big win):
identify :: Digit -> V.Vector Digit -> (Digit, Float)
identify digit training = V.minimumBy (comparing snd) distances
  where distances = V.map fn training
        fn ref = let d = distance (pixels digit) (pixels ref) in d `seq` (ref, d)
On my machine, the whole program now runs in ~5s:
% ghc --make -O2 Main.hs
[1 of 1] Compiling Main ( Main.hs, Main.o )
Linking Main ...
% time ./Main
./Main 5.00s user 0.11s system 99% cpu 5.115 total
The thunks were killing you.
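If you prefer bang patterns over an explicit seq, the same fix can be written as follows (an equivalent sketch; requires the BangPatterns extension):

{-# LANGUAGE BangPatterns #-}

-- the bang forces d as soon as the pair is demanded, just like the seq version
fn ref = let !d = distance (pixels digit) (pixels ref) in (ref, d)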

Your Vector version, partially unboxed and adapted to ByteString, compiled with -O2 -fllvm, runs in 8 seconds on my machine:
import Data.Ord
import Data.Maybe
import qualified Data.Vector as V
import qualified Data.Vector.Unboxed as U
import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as BC

type Pixels = U.Vector Int
data Digit = Digit { label :: !Int, pixels :: !Pixels }

distance :: Pixels -> Pixels -> Float
distance d1 d2 = sqrt . U.sum . U.zipWith pointDistance d1 $ d2
  where pointDistance a b = fromIntegral $ (a - b) * (a - b)

parseDigit :: B.ByteString -> Digit
parseDigit bs =
  let (label:pixels) = toIntegers bs []
  in Digit label (U.fromList pixels)
  where
    toIntegers bs is =
      let Just (i,bs') = BC.readInt bs
      -- keep the integer just read when the line is exhausted
      in if B.null bs' then reverse (i:is) else toIntegers (BC.tail bs') (i:is)

identify :: Digit -> V.Vector Digit -> (Digit, Float)
identify digit training = V.minimumBy (comparing snd) distances
  where distances = V.map fn training
        fn ref = (ref, distance (pixels digit) (pixels ref))

readDigits :: String -> IO (V.Vector Digit)
readDigits filename = do
  fileContent <- B.readFile filename
  return . V.map parseDigit . V.fromList . tail . BC.lines $ fileContent

main :: IO ()
main = do
  trainingSample <- readDigits "trainingsample.csv"
  validationSample <- readDigits "validationsample.csv"
  let result = V.map (\d -> (d, identify d trainingSample)) validationSample
      fmt (d, (ref, dist)) = putStrLn $ "Found " ++ show (label ref) ++ " for " ++ show (label d) ++ " (distance=" ++ show dist ++ ")"
  V.mapM_ fmt result
Output of +RTS -s:
989,632,984 bytes allocated in the heap
19,875,368 bytes copied during GC
31,016,504 bytes maximum residency (5 sample(s))
22,748,608 bytes maximum slop
78 MB total memory in use (1 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1761 colls, 0 par 0.05s 0.05s 0.0000s 0.0008s
Gen 1 5 colls, 0 par 0.00s 0.02s 0.0030s 0.0085s
INIT time 0.00s ( 0.00s elapsed)
MUT time 7.42s ( 7.69s elapsed)
GC time 0.05s ( 0.06s elapsed)
EXIT time 0.00s ( 0.01s elapsed)
Total time 7.47s ( 7.77s elapsed)
%GC time 0.7% (0.8% elapsed)
Alloc rate 133,419,569 bytes per MUT second
Productivity 99.3% of total user, 95.5% of total elapsed

Related

Why is recursion slower than filter with this Random number counter

I am writing a function that generates a million random numbers of 1 or 0 and then counts how many 0s were generated.
import System.Random
import Control.Monad
countZeros :: Int -> IO Int
countZeros n = (length . filter (==0)) <$> (replicateM n $ randomRIO (0,1 :: Int))

countZeros' :: Int -> IO Int
countZeros' n = go n 0
  where
    go :: Int -> Int -> IO Int
    go x acc = do
      r <- randomRIO (0,1 :: Int)
      case x of
        0 -> pure acc
        _ -> let acc' = if r == 0 then succ acc else acc
             in go (pred x) acc'
When I run the functions with an input of 1000000:
>λ= countZeros 1000000
499716
(0.93 secs, 789,015,080 bytes)
>λ= countZeros' 1000000
500442
(2.02 secs, 1,109,569,560 bytes)
I don't understand why the prime function is twice as slow as the other. I assumed that they are essentially doing the same thing behind the scenes.
I am using GHCi.
What am I missing?
With bang patterns, and proper compilation with -O2, the "prime" function is faster:
{-# LANGUAGE BangPatterns #-}
module Main where

import System.Random
import Control.Monad
import System.Environment

countZeros :: Int -> IO Int
countZeros n = (length . filter (==0)) <$> (replicateM n $ randomRIO (0,1 :: Int))

countZeros' :: Int -> IO Int
countZeros' n = go n 0
  where
    go :: Int -> Int -> IO Int
    go !x !acc = do
      r <- randomRIO (0,1 :: Int)
      case x of
        0 -> pure acc
        _ -> let acc' = if r == 0 then succ acc else acc
             in go (pred x) acc'

main :: IO ()
main = do
  [what] <- getArgs
  let n = 1000 * 1000 * 10
      fun = case what of
        "1" -> countZeros
        "2" -> countZeros'
        _   -> error "arg not a number"
  putStrLn "----"
  print =<< fun n
  putStrLn "----"
Compiled with
$ stack ghc -- RandomPerf.hs -O2 -Wall
$ stack ghc -- --version
The Glorious Glasgow Haskell Compilation System, version 8.6.3
Tests:
$ time ./RandomPerf.exe 1
----
4999482
----
real 0m3.329s
user 0m0.000s
sys 0m0.031s
$ time ./RandomPerf.exe 2
----
5001089
----
real 0m2.338s
user 0m0.000s
sys 0m0.046s
Repeating the tests gives comparable results, so this is not a fluke.
Result: the countZeros' function is significantly faster.
Using Criterion and running a proper benchmark is left as an exercise.
You probably used GHCi to assess performance, which prevents the optimizer from doing its job. GHCi sacrifices proper optimization to load files faster and to be more usable interactively.
These actually work in different ways from each other, at a level that matters. And both are slow.
The version using replicateM is bad because replicateM in IO can't stream its results. The entire list will be constructed at once, before filter and length get to start operating on it. The reason it's faster is that length is strict in its accumulator, so it doesn't generate a massive nested chain of thunks the way your other version does, and that's even worse for performance.
The recursive version doesn't use a strict accumulator. This means that the value it returns is a giant chain of nested thunks, holding on to all the generated entries and a bunch of indirect calls via list indexing. This uses even more memory than the filter version, because it's holding on to a bunch of closures as well as all the values. But even with that fixed, it would still be slow. Using !! just wrecks performance: it's recursive where a simple if would do the same job much more efficiently.
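A variant that avoids both problems (no intermediate list, and a strict accumulator) could look like this; a sketch, where countZerosFold is a hypothetical name not from the question:

import Control.Monad (foldM)
import System.Random (randomRIO)

countZerosFold :: Int -> IO Int
countZerosFold n = foldM step 0 [1 .. n]
  where
    step acc _ = do
      r <- randomRIO (0, 1 :: Int)
      -- pure $! forces the accumulator at every step, so no thunk chain builds up
      pure $! if r == 0 then acc + 1 else acc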

Memoized Collatz sequence

I've posted the same question on Code Review but failed to get an answer, so I am trying my luck here on SO.
Here is one of my programs that uses memoization and an array to improve performance and memory usage. The performance seems satisfactory, but the memory usage is ridiculous and I can't figure out what's wrong:
{-# LANGUAGE BangPatterns #-}
import Data.Functor
import Data.Array (Array)
import qualified Data.Array as Arr
import Control.DeepSeq
genColtzArr n = collatzArr
  where collatzArr = Arr.array (1, n) $ take n $ map (\v -> (v, collatz v 0)) [1..]
        collatz 1 !acc = 1 + acc
        collatz !m !acc
          | even m = go (m `div` 2) acc
          | otherwise = go (3 * m + 1) acc
          where go !l !acc
                  | l <= n = let !v = collatzArr Arr.! l in 1 + acc + v
                  | otherwise = collatz l $ 1 + acc
collatz here means the Collatz sequence. This function is supposed to receive a number n and return an array indexed from 1 to n, in which each cell contains the length of the chain from its index down to 1 obtained by applying the Collatz formula.
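For reference, the plain, unmemoized chain length that the array is caching looks like this (a sketch, not part of the original program):

collatzLen :: Int -> Int
collatzLen 1 = 1
collatzLen m
  | even m    = 1 + collatzLen (m `div` 2)
  | otherwise = 1 + collatzLen (3 * m + 1)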
But the memory usage of this method is extremely high. Here is the profiler result (GHC options -prof -fprof-auto -rtsopts, runtime option +RTS -p, n == 500000):
total alloc = 730,636,136 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc
genColtzArr.collatz Main 40.4 34.7
genColtzArr.collatz.go Main 25.5 14.4
COST CENTRE MODULE no. entries %time %alloc %time %alloc
genColtzArr Main 105 1 0.0 0.0 74.7 72.1
genColtzArr.collatzArr Main 106 1 8.0 20.8 74.7 72.1
genColtzArr.collatzArr.\ Main 107 500000 0.9 2.2 66.8 51.3
genColtzArr.collatz Main 109 1182582 40.4 34.7 65.9 49.1
genColtzArr.collatz.go Main 110 1182581 25.5 14.4 25.5 14.4
Please note that -O2 is not the desired answer. I want to figure out what the problem is in this program and, in general, how I should spot time and memory inefficiencies in Haskell code. Specifically, I have no idea why this code, with tail recursion and bang patterns, can consume so much memory.
UPDATE 1:
The same code run with +RTS -s produces this:
1,347,869,264 bytes allocated in the heap
595,901,528 bytes copied during GC
172,105,056 bytes maximum residency (7 sample(s))
897,704 bytes maximum slop
315 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 2408 colls, 0 par 0.412s 0.427s 0.0002s 0.0075s
Gen 1 7 colls, 0 par 0.440s 0.531s 0.0759s 0.1835s
INIT time 0.000s ( 0.000s elapsed)
MUT time 0.828s ( 0.816s elapsed)
GC time 0.852s ( 0.958s elapsed)
RP time 0.000s ( 0.000s elapsed)
PROF time 0.000s ( 0.000s elapsed)
EXIT time 0.004s ( 0.017s elapsed)
Total time 1.684s ( 1.791s elapsed)
%GC time 50.6% (53.5% elapsed)
Alloc rate 1,627,861,429 bytes per MUT second
Productivity 49.4% of total user, 46.4% of total elapsed
So it takes ~300 MB, which is still too large.
UPDATE 2:
Full code:
{-# LANGUAGE BangPatterns #-}

import Data.Functor
import Data.Array (Array)
import qualified Data.Array as Arr
import Control.DeepSeq

genColtzArr n = collatzArr
  where collatzArr = Arr.array (1, n) $ take n $ map (\v -> (v, collatz v 0)) [1..]
        collatz 1 !acc = 1 + acc
        collatz !m !acc
          | even m = go (m `div` 2) acc
          | otherwise = go (3 * m + 1) acc
          where go !l !acc
                  | l <= n = let !v = collatzArr Arr.! l in 1 + acc + v
                  | otherwise = collatz l $ 1 + acc

genLongestArr n = Arr.array (1, n) llist
  where colatz = genColtzArr n
        llist = (1, 1) : zipWith (\(n1, a1) l2 ->
                  let l1 = colatz Arr.! a1
                  in (n1 + 1, if l2 < l1 then a1 else n1 + 1))
                llist (tail $ Arr.elems colatz)

main :: IO ()
main = getLine >> do
  ns <- map read <$> lines <$> getContents
  let m = maximum ns
  let lar = genLongestArr m
  let iter [] = return ()
      iter (h:t) = (putStrLn $ show $ lar Arr.! h) >> iter t
  iter ns
As the other answer on Code Review hints, it's alright for a 500000-element boxed array to consume ~20MB of memory; however, it's not only the array, but a lot of things all together:
Although you put bang patterns everywhere, array initialization itself is a lazy foldr:
-- from GHC.Arr
array (l,u) ies
  = let n = safeRangeSize (l,u)
    in unsafeArray' (l,u) n
       [(safeIndex (l,u) n i, e) | (i, e) <- ies]

unsafeArray' :: Ix i => (i,i) -> Int -> [(Int, e)] -> Array i e
unsafeArray' (l,u) n@(I# n#) ies = runST (ST $ \s1# ->
  case newArray# n# arrEleBottom s1# of
    (# s2#, marr# #) ->
      foldr (fill marr#) (done l u n marr#) ies s2#)
So unless you evaluate the last element of the array, it keeps a reference to the list used in initialization. Usually that list can be GC'd on the fly while you evaluate the array, but in your case the mutual references and self references disturb the common GC pattern:
- llist is self-referential (each element is produced from earlier ones), so it will not be GC'd until you evaluate its last element;
- it also holds a reference to genColtzArr, so genColtzArr won't be GC'd until llist is fully evaluated;
- you might think collatz is tail recursive, but it's not: it's mutually recursive with collatzArr, so again neither of them will be GC'd until fully evaluated.
Everything combined, your program keeps three 500000-element list-like structures in memory, resulting in a peak heap size of ~80MB.
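A generic way to break this kind of retention is to force a structure to normal form as soon as it is built; a minimal sketch, assuming an NFData instance for the element type:

import Control.DeepSeq (NFData, deepseq)
import Data.Array (Array, elems)

-- force every element of a boxed array before handing it on
forceArray :: NFData e => Array i e -> Array i e
forceArray arr = elems arr `deepseq` arr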
Solution
The obvious solution is to force every array / list to normal form before it's used in another, so you won't keep multiple copies of the same data in memory.
-- needs: import Data.List (scanl')
genLongestArr :: Int -> Array Int Int
genLongestArr n =
  let collatz = genColtzArr n
  -- deepseq genColtzArr before mapping over it;
  -- this is equivalent to your recursive definition
  in collatz `deepseq`
       (Arr.listArray (1,n) $ fmap fst $ scanl' (maxWith snd) (0, 0) $ Arr.assocs collatz)

maxWith :: Ord a => (b -> a) -> b -> b -> b
maxWith f b b' = case compare (f b) (f b') of
  LT -> b'
  _  -> b
And in main:
-- deepseq lar before mapping over it
-- this is equivalent to your iter loop
lar `deepseq` mapM_ (print . (lar Arr.!)) ns
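Put together, main could read as follows (a sketch of the whole loop under the same input handling as the original):

main :: IO ()
main = do
  _ <- getLine
  ns <- map read . lines <$> getContents
  let lar = genLongestArr (maximum ns)
  -- force the whole array once, then answer the queries from it
  lar `deepseq` mapM_ (print . (lar Arr.!)) ns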
Nothing can be done about genColtzArr: it uses itself for memoization, so the mutual recursion there is kind of necessary.
Now the heap graph peaks at ~20MB as it should:
(Disclaimer: All programs in this answer were compiled with -O0)

Profiling/Improving memory usage and/or GC time

Original
I'm trying to aggregate a CSV file and experiencing [what I consider to be] excessive memory usage and/or GC effort. The issue seems to arise when the number of groups increases. There is no problem when the keys are in the hundreds or thousands, but quickly starts spending a majority of time in the GC when the keys reach tens of thousands.
Update
Moving from Data.ByteString.Lazy.ByteString to Data.ByteString.Short.ShortByteString significantly reduced the memory consumption (to a level I think is reasonable). However, the amount of time spent in the GC still seems far higher than I would expect to be necessary. I moved from Data.HashMap.Strict.HashMap to Data.HashTable.ST.Basic.HashTable to see if the mutation in ST would help but it did not appear to. The following is the current full test code, including generateFile to create a test sample:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import System.IO (withFile, IOMode(WriteMode))
import qualified System.Random as Random
import qualified Data.ByteString.Short as BSS
import qualified Data.ByteString.Lazy.Char8 as BL
import qualified Data.Vector as V
import qualified Data.Vector.Mutable as MV
import qualified Control.Monad.ST as ST
import qualified Data.HashTable.ST.Basic as HT
import qualified Data.HashTable.Class as HT (toList)
import Data.Hashable (Hashable, hashWithSalt)
import Data.List (unfoldr)
import qualified Data.Traversable as T
import Control.Monad (forM_)
instance Hashable a => Hashable (V.Vector a) where
  hashWithSalt s = hashWithSalt s . V.toList

data CSVFormat = CSVFormat {
    csvSeparator :: Char,
    csvWrapper :: Char
  }

readCSV :: CSVFormat -> Int -> FilePath -> IO [V.Vector BSS.ShortByteString]
readCSV format skip filepath = BL.readFile filepath >>= return . parseCSV format skip

parseCSV :: CSVFormat -> Int -> BL.ByteString -> [V.Vector BSS.ShortByteString]
parseCSV (CSVFormat sep wrp) skp = drop skp . unfoldr (\bs -> if BL.null bs then Nothing else Just (apfst V.fromList (parseLine bs)))
  where
    {-# INLINE apfst #-}
    apfst f (x,y) = (f x,y)

    {-# INLINE isCr #-}
    isCr c = c == '\r'

    {-# INLINE isLf #-}
    isLf c = c == '\n'

    {-# INLINE isSep #-}
    isSep c = c == sep || isLf c || isCr c

    {-# INLINE isWrp #-}
    isWrp c = c == wrp

    {-# INLINE parseLine #-}
    parseLine :: BL.ByteString -> ([BSS.ShortByteString], BL.ByteString)
    parseLine bs =
      let (field,bs') = parseField bs in
      case BL.uncons bs' of
        Just (c,bs1)
          | isLf c -> (field : [],bs1)
          | isCr c ->
              case BL.uncons bs1 of
                Just (c,bs2) | isLf c -> (field : [],bs2)
                _ -> (field : [],bs1)
          | otherwise -> apfst (field :) (parseLine bs1)
        Nothing -> (field : [],BL.empty)

    {-# INLINE parseField #-}
    parseField :: BL.ByteString -> (BSS.ShortByteString, BL.ByteString)
    parseField bs =
      case BL.uncons bs of
        Just (c,bs')
          | isWrp c -> apfst (BSS.toShort . BL.toStrict . BL.concat) (parseEscaped bs')
          | otherwise -> apfst (BSS.toShort . BL.toStrict) (BL.break isSep bs)
        Nothing -> (BSS.empty,BL.empty)

    {-# INLINE parseEscaped #-}
    parseEscaped :: BL.ByteString -> ([BL.ByteString], BL.ByteString)
    parseEscaped bs =
      let (chunk,bs') = BL.break isWrp bs in
      case BL.uncons bs' of
        Just (_,bs1) ->
          case BL.uncons bs1 of
            Just (c,bs2)
              | isWrp c -> apfst (\xs -> chunk : BL.singleton wrp : xs) (parseEscaped bs2)
              | otherwise -> (chunk : [],bs1)
            Nothing -> (chunk : [],BL.empty)
        Nothing -> error "EOF within quoted string"

aggregate :: [Int]
          -> Int
          -> [V.Vector BSS.ShortByteString]
          -> [V.Vector BSS.ShortByteString]
aggregate groups size records =
  let indices = [0..size - 1] in
  ST.runST $ do
    state <- HT.new
    forM_ records (\record -> do
      let key = V.fromList (map (\g -> record V.! g) groups)
      existing <- HT.lookup state key
      case existing of
        Just x ->
          forM_ indices (\i -> do
            current <- MV.read x i
            MV.write x i $! const current (record V.! i))
        Nothing -> do
          x <- MV.new size
          forM_ indices (\i -> MV.write x i $! record V.! i)
          HT.insert state key x)
    HT.toList state >>= T.traverse V.unsafeFreeze . map snd

filedata :: IO ([Int],Int,[V.Vector BSS.ShortByteString])
filedata = do
  records <- readCSV (CSVFormat ',' '"') 1 "file.csv"
  return ([0,1,2],18,records)

main :: IO ()
main = do
  (key,len,records) <- filedata
  print (length (aggregate key len records))

generateFile :: IO ()
generateFile = do
  withFile "file.csv" WriteMode $ \handle -> do
    forM_ [0..650000] $ \_ -> do
      x <- BL.pack . show . truncate . (* 15 ) <$> (Random.randomIO :: IO Double)
      y <- BL.pack . show . truncate . (* 50 ) <$> (Random.randomIO :: IO Double)
      z <- BL.pack . show . truncate . (* 200) <$> (Random.randomIO :: IO Double)
      BL.hPut handle (BL.intercalate "," (x:y:z:replicate 15 (BL.replicate 20 ' ')))
      BL.hPut handle "\n"
I receive the following profiling result:
17,525,392,208 bytes allocated in the heap
27,394,021,360 bytes copied during GC
285,382,192 bytes maximum residency (129 sample(s))
3,714,296 bytes maximum slop
831 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 577 colls, 0 par 1.576s 1.500s 0.0026s 0.0179s
Gen 1 129 colls, 0 par 25.335s 25.663s 0.1989s 0.2889s
TASKS: 3 (1 bound, 2 peak workers (2 total), using -N1)
SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.000s ( 0.002s elapsed)
MUT time 11.965s ( 23.939s elapsed)
GC time 15.148s ( 15.400s elapsed)
RP time 0.000s ( 0.000s elapsed)
PROF time 11.762s ( 11.763s elapsed)
EXIT time 0.000s ( 0.088s elapsed)
Total time 38.922s ( 39.429s elapsed)
Alloc rate 1,464,687,582 bytes per MUT second
Productivity 30.9% of total user, 30.5% of total elapsed
gc_alloc_block_sync: 0
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
And the following heap visualization:
This turned out to be the V.! calls not being strict enough. Replacing them with indexM hugely reduced the memory consumption.
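Concretely, the change in aggregate's inner loop might look roughly like this (a sketch of the Just branch only; indexM performs the lookup inside the monad, so no thunk retaining the whole record is written into the table):

Just x ->
  forM_ indices (\i -> do
    current <- MV.read x i
    field <- V.indexM record i  -- the lookup happens now, not lazily
    MV.write x i $! const current field)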

How to improve BitGet performance

I'm developing a binary parsing program in Haskell.
I found that both the strict and the lazy BitGet seem to be very slow and, surprisingly, to allocate a lot of memory.
I tested the code below (built with -O2), which parses all of the bits in the input file, and looked at the profiling results.
For this example, I used a 1,819,173-byte binary file.
Strict version:
import Prelude as P
import System.Environment (getArgs)
import Data.ByteString as B
import Data.Binary.Strict.BitGet

coreFunc :: Int -> BitGet Int
coreFunc len = f len 0
  where
    f 0 r = return r
    f l _ = do
      b <- getBit
      f (l - 1) $ if b then 1 else 0

mainFunc :: B.ByteString -> IO ()
mainFunc bs =
  case runBitGet bs (coreFunc ((B.length bs) * 8)) of
    Left emsg -> error emsg
    Right r -> print $ show r

main :: IO ()
main = do
  args <- getArgs
  case args of
    [] -> return ()
    (x:_) -> do
      bs <- B.readFile x
      mainFunc bs
      return ()
-- profiling result --
total time = 1.74 secs (1741 ticks @ 1000 us, 1 processor)
total alloc = 7,948,043,192 bytes (excludes profiling overheads)
Lazy version:
import Prelude as P
import System.Environment (getArgs)
import Data.ByteString.Lazy as B
import Data.Binary.Bits.Get
import Data.Binary.Get
import Data.Int (Int64)

coreFunc :: Int64 -> BitGet Int
coreFunc len = f len 0
  where
    f 0 r = return r
    f l _ = do
      b <- getBool
      f (l - 1) $ if b then 1 else 0

mainFunc :: B.ByteString -> IO ()
mainFunc bs = do
  let r = runGet (runBitGet (coreFunc ((B.length bs) * 8))) bs
  print $ show r

main :: IO ()
main = do
  args <- getArgs
  case args of
    [] -> return ()
    (x:_) -> do
      bs <- B.readFile x
      mainFunc bs
      return ()
-- profiling result --
total time = 2.21 secs (2207 ticks @ 1000 us, 1 processor)
total alloc = 6,405,531,680 bytes (excludes profiling overheads)
I want to ask:
How can I improve this performance?
Can I profile the internal behavior of the BitGet library?
Is there another way to parse binary bits?
It seems like your coreFunc is supposed to skip forward some (len - 1) number of bits, then read a single bit as a 0 or 1 and return it in the BitGet monad. If that's the intent, something like this will be much more efficient.
I'm using the binary-bits package:
import Control.Applicative
import Data.Binary.Bits.Get
import Data.Binary.Get

coreFunc :: Int -> Get Int
coreFunc len =
  fromEnum <$> runBitGet (block (skip (len - 1) *> bool))

-- skip n bits: discard the whole bytes first, then the leftover bits
skip :: Int -> Block ()
skip n = byteString bytes *> word64be bits *> pure ()
  where (bytes, bits) = quotRem n 8 -- sizeOf Word8
Unfortunately the package does not have a skip function to let us skip n bits, which the binary package it's based on does include, so I've had to write my own. It's possible a more efficient version could be written with access to the Block internals, but the library might already be optimizing it well enough that there's no benefit.
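A driver for this version might look like the following (a sketch; the file name is hypothetical):

import qualified Data.ByteString.Lazy as BL
import Data.Binary.Get (runGet)

main :: IO ()
main = do
  bs <- BL.readFile "input.bin"
  -- run the parser over every bit of the file, as in the question
  print $ runGet (coreFunc (fromIntegral (BL.length bs) * 8)) bs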
I'd like to benchmark this version against yours to get an accurate comparison; can you provide the binary file you use for testing?

Haskell More efficient way to parse file of lines of digits

So I have an 8MB file of lines, each with 6 ints separated by a space.
My current method for parsing it is:
tuplify6 :: [a] -> (a, a, a, a, a, a)
tuplify6 [l, m, n, o, p, q] = (l, m, n, o, p, q)

toInts :: String -> (Int, Int, Int, Int, Int, Int)
toInts line =
  tuplify6 $ map read stringNumbers
  where stringNumbers = split " " line
and mapping toInts over
liftM lines . readFile
which will return me a list of tuples. However, when I run this, it takes nearly 25 seconds to load and parse the file. Any way I can speed this up? The file is just plain text.
You can speed it up by using ByteStrings, e.g.
module Main (main) where

import System.Environment (getArgs)
import qualified Data.ByteString.Lazy.Char8 as C
import Data.Char

main :: IO ()
main = do
  args <- getArgs
  mapM_ doFile args

doFile :: FilePath -> IO ()
doFile file = do
  bs <- C.readFile file
  let tups = buildTups 0 [] $ C.dropWhile (not . isDigit) bs
  print (length tups)

buildTups :: Int -> [Int] -> C.ByteString -> [(Int,Int,Int,Int,Int,Int)]
buildTups 6 acc bs = tuplify6 acc : buildTups 0 [] bs
buildTups k acc bs
  | C.null bs = if k == 0 then [] else error ("Bad file format " ++ show k)
  | otherwise = case C.readInt bs of
      Just (i,rm) -> buildTups (k+1) (i:acc) $ C.dropWhile (not . isDigit) rm
      Nothing -> error ("No Int found: " ++ show (C.take 100 bs))

tuplify6 :: [a] -> (a, a, a, a, a, a)
tuplify6 [l, m, n, o, p, q] = (l, m, n, o, p, q)
runs pretty fast:
$ time ./fileParse IntList
200000
real 0m0.119s
user 0m0.115s
sys 0m0.003s
for an 8.1 MiB file.
On the other hand, using Strings and your conversion (with a couple of seqs to force evaluation) also took only 0.66s, so the bulk of the time seems to be spent not parsing, but working with the result.
Oops, missed a seq, so the reads were not actually evaluated for the String version. Fixing that, String + read takes about four seconds, and a bit above one second with the custom Int parser from @Rotsor's comment
foldl' (\a c -> 10*a + fromEnum c - fromEnum '0') 0
so parsing apparently did take a significant amount of the time.
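Wrapped up, that custom parser could slot into the original code like this (a sketch; readInt' is a hypothetical name, and words replaces the original split):

import Data.List (foldl')

-- parse a string of decimal digits with a strict left fold
readInt' :: String -> Int
readInt' = foldl' (\a c -> 10*a + fromEnum c - fromEnum '0') 0

toInts :: String -> (Int, Int, Int, Int, Int, Int)
toInts = tuplify6 . map readInt' . words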
