Efficient Conversion of ByteString to [Word16] - haskell

I am attempting to do a plain conversion from a bytestring to a list of Word16s. The implementation below using Data.Binary.Get works, though it is a performance bottleneck in the code. This is understandable as IO is always going to be slow, but I was wondering if there isn't a more efficient way of doing this.
getImageData' = do
  e <- isEmpty
  if e
    then return []
    else do
      w  <- getWord16be
      ws <- getImageData'
      return $ w : ws
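(For context, a decoder like this is presumably driven with runGet from Data.Binary.Get; the driver below is a hypothetical sketch, not part of the original code.)

import Data.Binary.Get (runGet)
import qualified Data.ByteString.Lazy as BL
import Data.Word (Word16)

-- Hypothetical driver: run the decoder over a lazy ByteString.
imageData :: BL.ByteString -> [Word16]
imageData = runGet getImageData'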

I suspect the big problem you're encountering with Data.Binary.Get is that the decoders are inherently much too strict for your purpose. This also appears to be the case for serialise, and probably also the other serialization libraries. I think the fundamental trouble is that while you know that the operation will succeed as long as the ByteString has an even number of bytes, the library does not know that. So it has to read in the whole ByteString before it can conclude "Ah yes, all is well" and construct the list you've requested. Indeed, the way you're building the result, it's going to build a slew of closures (proportional in number to the length) before actually doing anything useful.
How can you fix this? Just use the bytestring library directly. The easiest thing is to use unpack, but I think you'll probably get slightly better performance like this:
{-# language BangPatterns #-}
module Wordy where

import qualified Data.ByteString as BS
import Data.ByteString (ByteString)
import Data.Word (Word16)
import Data.Maybe (fromMaybe)
import Data.Bits (unsafeShiftL)

toDBs :: ByteString -> Maybe [Word16]
toDBs bs0
  | odd (BS.length bs0) = Nothing
  | otherwise = Just (go bs0)
  where
    go bs = fromMaybe [] $ do
      (b1, bs')  <- BS.uncons bs
      (b2, bs'') <- BS.uncons bs'
      let !res = (fromIntegral b1 `unsafeShiftL` 8) + fromIntegral b2
      Just (res : go bs'')
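For a quick sanity check, the expected behavior (big-endian pairing, matching getWord16be in the question) would be:

-- 0x0102 == 258, 0xABCD == 43981
-- toDBs (BS.pack [0x01, 0x02, 0xAB, 0xCD]) == Just [258, 43981]
-- toDBs (BS.pack [0x01]) == Nothing  -- odd length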

Related

Inverse of `Data.Text.Encoding.decodeLatin1`?

Is there a function f :: Text -> Maybe ByteString such that forall x:
f (decodeLatin1 x) == Just x
Note, decodeLatin1 has the signature:
decodeLatin1 :: ByteString -> Text
I'm concerned that encodeUtf8 is not what I want, as I'm guessing what it does is just dump the UTF-8 string out as a ByteString, not reverse the changes that decodeLatin1 made on the way in to characters in the upper half of the character set.
I understand that f has to return a Maybe, because in general there's Unicode characters that aren't in the Latin character set, but I just want this to round trip at least, in that if we start with a ByteString we should get back to it.
DISCLAIMER: consider this a long comment rather than a solution, because I haven't tested.
I think you can do it with the witch library. It is a general-purpose type-converter library with a fair amount of type safety. It has a type class called TryFrom for conversions between types that might fail to cast.
Luckily, witch provides conversions to/from encodings too, including an instance TryFrom Text (ISO_8859_1 ByteString), meaning you can convert between Text and a latin1-encoded ByteString. So I think (not tested!!) this should work:
{-# LANGUAGE TypeApplications #-}
import Witch (tryInto, ISO_8859_1)
import Data.Tagged (Tagged (unTagged))

f :: Text -> Maybe ByteString
f s = case tryInto @(ISO_8859_1 ByteString) s of
  Left _err -> Nothing
  Right bs  -> Just (unTagged bs)
Notice that tryInto returns an Either TryFromException value, so if you want to handle errors you can do it with Either. Up to you.
Also, the witch docs point out that this conversion goes via the String type, so probably there is an out-of-the-box solution without the need to depend on the witch package. I don't know of such a solution, and looking at the source code hasn't helped.
Edit:
Having read the witch source code, apparently this should work:
import qualified Data.Text as T
import Data.Char (isLatin1)
import qualified Data.ByteString.Char8 as C

f :: Text -> Maybe ByteString
f t = if allCharsAreLatin then Just (C.pack str) else Nothing
  where
    str = T.unpack t
    allCharsAreLatin = all isLatin1 str
The latin1 encoding is pretty damn simple -- codepoint X maps to byte X, whenever that's in range of a byte. So just unpack and repack immediately.
import Control.Monad
import qualified Data.Text as T
import qualified Data.ByteString.Char8 as BS

latin1EncodeText :: T.Text -> Maybe BS.ByteString
latin1EncodeText t = BS.pack (T.unpack t) <$ guard (T.all (< '\256') t)
It's possible to avoid the intermediate String, but you should probably make sure this is your bottleneck before trying for that.
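One way to do that, sketched here and untested, is to fold over the Text directly and emit bytes through a Builder, so no intermediate String is materialized:

import Control.Monad (guard)
import Data.Char (ord)
import qualified Data.Text as T
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL
import qualified Data.ByteString.Builder as BB

-- Sketch: encode via a Builder, avoiding T.unpack entirely.
latin1EncodeText' :: T.Text -> Maybe BS.ByteString
latin1EncodeText' t = encoded <$ guard (T.all (< '\256') t)
  where
    -- Each codepoint below 256 maps to exactly one byte.
    encoded = BL.toStrict . BB.toLazyByteString $
      T.foldr (\c b -> BB.word8 (fromIntegral (ord c)) <> b) mempty t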

heap memory buildup with xml-conduit parseBytes

I'm parsing some rather large XML files with xml-conduit's streaming interface https://hackage.haskell.org/package/xml-conduit-1.8.0/docs/Text-XML-Stream-Parse.html#v:parseBytes but I'm seeing a heap memory buildup (here on a small test file):
[heap profile screenshots omitted]
The actual data shouldn't take up that much heap – if I serialise and re-read, the resident memory use is kilobytes vs the megabytes here.
The minimal example I've managed to reproduce this with:
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Monad
import Control.Monad.IO.Class
import Data.Conduit
import Data.Conduit.Binary (sourceFile)
import qualified Data.Conduit.List as CL
import Data.Text (Text)
import Text.XML.Stream.Parse

type Y = [(Text, Text)]

main :: IO ()
main = do
  res1 <- runConduitRes $
       sourceFile "test.xml"
    .| Text.XML.Stream.Parse.parseBytes def
    .| parseMain
    .| CL.foldM get []
  print res1

get :: (MonadIO m, Show a) => [a] -> [a] -> m [a]
get acc !vals = do
  liftIO $! print vals -- this oughta force it?
  return $! take 1 vals ++ acc

parseMain = void $ tagIgnoreAttrs "Period" parseDetails

parseDetails = many parseParam >>= yield

parseParam = tag' "param" parseParamAttrs $ \idAttr -> do
  value <- content
  return (idAttr, value)

parseParamAttrs = do
  idAttr <- requireAttr "id"
  attr "name"
  return idAttr
If I change get to just return ["hi"] or something, I don't get the buildup. So it seems the returned texts keep some reference to the larger text they were in (e.g. zero-copy slicing, cf. comment at https://hackage.haskell.org/package/text-0.11.2.0/docs/Data-Text.html#g:18 ), so the rest of the text can't be garbage collected even though we're using only little parts.
Our fix is to use Data.Text.copy on any attributes we want to yield:
someattr <- requireAttr "n"
yield (T.copy someattr)
which lets us parse with nearly constant memory use.
(And we might consider using https://markkarpov.com/post/short-bs-and-text.html#shorttext if we want to save even more memory.)
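Applied to the parseParamAttrs from the minimal example above, the fix might look like this (a sketch, untested against the original program, assuming import qualified Data.Text as T):

parseParamAttrs = do
  idAttr <- requireAttr "id"
  attr "name"
  -- T.copy allocates a fresh Text, dropping the reference to the large
  -- input chunk that the zero-copy slice would otherwise keep alive.
  return (T.copy idAttr)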

Haskell Conduit: having a Sink return a value based on the values from upstream

I've been trying to use the Conduit library to do some simple I/O involving files, but I'm having a hard time.
I have a text file containing nothing but a few digits such as 1234. I have a function that reads the file using readFile (no conduits), and returns Maybe Int (Nothing is returned when the file actually doesn't exist). I'm trying to write a version of this function that uses conduits, and I just can't figure it out.
Here is what I have:
import Control.Monad.Trans.Resource
import Data.Conduit
import Data.Functor
import System.Directory
import qualified Data.ByteString.Char8 as B
import qualified Data.Conduit.Binary as CB
import qualified Data.Conduit.Text as CT
import qualified Data.Text as T

myFile :: FilePath
myFile = "numberFile"

withoutConduit :: IO (Maybe Int)
withoutConduit = do
  doesExist <- doesFileExist myFile
  if doesExist
    then Just . read <$> readFile myFile
    else return Nothing

withConduit :: IO (Maybe Int)
withConduit = do
  doesExist <- doesFileExist myFile
  if doesExist
    then runResourceT $ source $$ conduit =$ sink
    else return Nothing
  where
    source :: Source (ResourceT IO) B.ByteString
    source = CB.sourceFile myFile

    conduit :: Conduit B.ByteString (ResourceT IO) T.Text
    conduit = CT.decodeUtf8

    sink :: Sink T.Text (ResourceT IO) (Maybe Int)
    sink = awaitForever $ \txt -> let num = read . T.unpack $ txt :: Int
                                  in -- I don't know what to do here...
Could someone please help me complete the sink function?
Thanks!
This isn't really a good example of where conduit provides a lot of value, at least not the way you're looking at it right now. Specifically, you're trying to use the read function, which requires that the entire value be in memory. Additionally, your current error handling is a bit loose: you'll just get a read: no parse error if there's anything unexpected in the content.
However, there is a way to make this a meaningful conduit exercise: parse the ByteString byte by byte ourselves and avoid the read function. Fortunately, this pattern is a standard left fold, and the conduit-combinators package provides exactly the right function for it: foldlCE, an element-wise left fold over a chunked stream.
{-# LANGUAGE OverloadedStrings #-}
import Conduit
import Data.Word8
import qualified Data.ByteString as S

sinkInt :: Monad m => Consumer S.ByteString m Int
sinkInt =
    foldlCE go 0
  where
    go total w
      | _0 <= w && w <= _9 = total * 10 + (fromIntegral $ w - _0)
      | otherwise = error $ "Invalid byte: " ++ show w

main :: IO ()
main = do
    x <- yieldMany ["1234", "5678"] $$ sinkInt
    print x
There are plenty of caveats that go along with this: it will simply throw an exception if there are unexpected bytes, and it doesn't handle integer overflow at all (though fixing that is just a matter of replacing Int with Integer). It's important to note that, since the in-memory string representation of a valid 32- or 64-bit int is always going to be tiny, conduit is overkill for this problem, though I hope that this code gives some guidance on how to generally write conduit code.

Rechunk a conduit into larger chunks using combinators

I am trying to construct a Conduit that receives as input ByteStrings (of around 1kb per chunk in size) and produces as output concatenated ByteStrings of 512kb chunks.
This seems like it should be simple to do, but I'm having a lot of trouble: most of the strategies I've tried have only succeeded in dividing the chunks into smaller chunks; I haven't succeeded in concatenating them into larger ones.
I started out trying isolate, then takeExactlyE and eventually conduitVector, but to no avail. Eventually I settled on this:
import Control.Monad (when)
import Data.Monoid ((<>))
import qualified Data.Conduit as C
import qualified Data.Conduit.Combinators as C
import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as BL

chunksOfAtLeast :: Monad m => Int -> C.Conduit B.ByteString m BL.ByteString
chunksOfAtLeast chunkSize = loop BL.empty chunkSize
  where
    loop buffer n = do
      mchunk <- C.await
      case mchunk of
        Nothing ->
          -- Yield last remaining bytes
          when (n < chunkSize) (C.yield buffer)
        Just chunk -> do
          -- Yield when the buffer has been filled and start over
          let buffer' = buffer <> BL.fromStrict chunk
              l = B.length chunk
          if n <= l
            then C.yield buffer' >> loop BL.empty chunkSize
            else loop buffer' (n - l)
P.S. I decided not to split larger chunks for this function, but this was just a convenient simplification.
However, this seems very verbose given all the conduit functions that deal with chunking[1,2,3,4]. Please help! There must surely be a better way to do this using combinators, but I am missing some piece of intuition!
P.P.S. Is it ok to use lazy bytestring for the buffer as I've done? I'm a bit unclear about the internal representation for bytestring and whether this will help, especially since I'm using BL.length which I guess might evaluate the thunk anyway?
Conclusion
Just to elaborate on Michael's answer and comments, I ended up with this conduit:
import Data.ByteString (ByteString)
import Data.Conduit ((=$=))
import qualified Data.Conduit as C
import qualified Data.Conduit.Combinators as C

-- | "Strict" rechunk of a chunked conduit
chunksOfE' :: (MonadBase base m, PrimMonad base)
           => Int
           -> C.Conduit ByteString m ByteString
chunksOfE' chunkSize = C.vectorBuilder chunkSize C.mapM_E =$= C.map fromByteVector
My understanding is that vectorBuilder will pay the cost for concatenating the smaller chunks early on, producing the aggregated chunks as strict bytestrings.
From what I can tell, an alternative implementation that produces lazy bytestring chunks (i.e. "chunked chunks") might be desirable when the aggregated chunks are very large and/or feed into a naturally streaming interface like a network socket. Here's my best attempt at the "lazy bytestring" version:
import qualified Data.Sequences.Lazy as SL
import qualified Data.Sequences as S
import qualified Data.Conduit.List as CL

-- | "Lazy" rechunk of a chunked conduit
chunksOfE :: (Monad m, SL.LazySequence lazy strict)
          => S.Index lazy
          -> C.Conduit strict m lazy
chunksOfE chunkSize = CL.sequence C.sinkLazy =$= C.takeE chunkSize
How about this?
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
import ClassyPrelude.Conduit

chunksOfAtLeast :: Monad m => Int -> Conduit ByteString m LByteString
chunksOfAtLeast chunkSize =
    loop
  where
    loop = do
        lbs <- takeCE chunkSize =$= sinkLazy
        unless (null lbs) $ do
            yield lbs
            loop

main :: IO ()
main =
    yieldMany ["hello", "there", "world!"]
        $$ chunksOfAtLeast 3
        =$ mapM_C print
There are lots of other approaches you could take depending on your goals. If you want a strict buffer, then using blaze-builder or vectorBuilder would make a lot of sense. But this keeps the same type signature you already have.
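For illustration, here is a strict-buffer variant using bytestring's Builder (used here as a stand-in for blaze-builder; a sketch written against the older conduit operators used above, and untested):

import Control.Monad (unless)
import Data.Monoid ((<>))
import qualified Data.ByteString as B
import qualified Data.ByteString.Builder as BB
import qualified Data.ByteString.Lazy as BL
import qualified Data.Conduit as C

-- Accumulate input in a Builder and flush once at least chunkSize
-- bytes have been buffered, yielding strict ByteStrings.
rechunkStrict :: Monad m => Int -> C.Conduit B.ByteString m B.ByteString
rechunkStrict chunkSize = loop mempty 0
  where
    loop acc n = do
      mbs <- C.await
      case mbs of
        Nothing -> unless (n == 0) (C.yield (flush acc))
        Just bs
          | n + B.length bs >= chunkSize ->
              C.yield (flush (acc <> BB.byteString bs)) >> loop mempty 0
          | otherwise ->
              loop (acc <> BB.byteString bs) (n + B.length bs)
    flush = BL.toStrict . BB.toLazyByteString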

Building a histogram with haskell, many times slower than with python

I was going to test naive Bayes classification. One part of it was building a histogram of the training data. The problem is that I am using a large training set: the haskell-cafe mailing list archive going back a couple of years, with over 20k files in the folder.
It takes a bit over two minutes to create the histogram with Python, and a little over eight minutes with Haskell. I'm using Data.Map (insertWith'), enumerators and text. What else can I do to speed up the program?
Haskell:
import qualified Data.Text as T
import qualified Data.Text.IO as TI
import System.Directory
import Control.Applicative
import Control.Monad (filterM, foldM)
import System.FilePath.Posix ((</>))
import qualified Data.Map as M
import Data.Map (Map)
import Data.List (foldl')
import Control.Exception.Base (bracket)
import System.IO (Handle, openFile, hClose, hSetEncoding, IOMode(ReadMode), latin1)
import qualified Data.Enumerator as E
import Data.Enumerator (($$), (>==>), (<==<), (==<<), (>>==), ($=), (=$))
import qualified Data.Enumerator.List as EL
import qualified Data.Enumerator.Text as ET

withFile' :: (Handle -> IO c) -> FilePath -> IO c
withFile' f fp =
  bracket
    (do h <- openFile fp ReadMode
        hSetEncoding h latin1
        return h)
    hClose
    f

buildClassHistogram c = do
  files <- filterM doesFileExist =<< map (c </>) <$> getDirectoryContents c
  foldM fileHistogram M.empty files

fileHistogram m file = withFile' (\h -> E.run_ $ enumHist h) file
  where
    enumHist h = ET.enumHandle h $$
      EL.fold (\m' l -> foldl' (\m'' w -> M.insertWith' (const (+1)) w 1 m'') m' $ T.words l) m
Python:
for filename in listdir(root):
    filepath = root + "/" + filename
    # print(filepath)
    fp = open(filepath, "r", encoding="latin-1")
    for word in fp.read().split():
        if word in histogram:
            histogram[word] = histogram[word] + 1
        else:
            histogram[word] = 1
Edit: Added imports
You could try using imperative hash maps from the hashtables package: http://hackage.haskell.org/package/hashtables
I remember I once got a moderate speedup compared to Data.Map. I wouldn't expect anything spectacular though.
UPDATE
I simplified your python code so I could test it on a single big file (100 million lines):
import sys
histogram={}
for word in sys.stdin.readlines():
if word in histogram:
histogram[word] = histogram[word]+1
else:
histogram[word] = 1
print histogram.get("the")
Takes 6.06 seconds
Haskell translation using hashtables:
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.ByteString.Char8 as T
import qualified Data.HashTable.IO as HT

main = do
    ls <- T.lines `fmap` T.getContents
    h <- HT.new :: IO (HT.BasicHashTable T.ByteString Int)
    flip mapM_ ls $ \w -> do
        r <- HT.lookup h w
        case r of
            Nothing -> HT.insert h w (1 :: Int)
            Just c  -> HT.insert h w (c + 1)
    HT.lookup h "the" >>= print
Run with a large allocation area: histogram +RTS -A500M
Takes 9.3 seconds, with 2.4% GC. Still quite a bit slower than Python but not too bad.
According to the GHC user guide, you can change the RTS options while compiling:
GHC lets you change the default RTS options for a program at compile
time, using the -with-rtsopts flag (Section 4.12.6, “Options affecting
linking”). A common use for this is to give your program a default
heap and/or stack size that is greater than the default. For example,
to set -H128m -K64m, link with -with-rtsopts="-H128m -K64m".
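For example, to bake the allocation-area setting from the run above into the binary (this compiler invocation is mine, not from the guide):
ghc -O2 -rtsopts "-with-rtsopts=-A500M" histogram.hs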
Your Haskell and Python implementations are using maps with different complexities. Python dictionaries are hash maps so the expected time for each operation (membership test, lookup, and insertion) is O(1). The Haskell version uses Data.Map which is a balanced binary search tree so the same operations take O(lg n) time. If you change your Haskell version to use a different map implementation, say a hash table or some sort of trie, it should get a lot quicker. However, I'm not familiar enough with the different modules implementing these data structures to say which is best. I'd start with the Data category on Hackage and look for one that you like. You might also look for a map that allows destructive updates like STArray does.
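As one concrete illustration (my sketch, not from the original post): the pure HashMap from unordered-containers keeps the same left-fold structure as the Data.Map version but gives O(1) expected insert and lookup:

import Data.List (foldl')
import qualified Data.HashMap.Strict as HM
import qualified Data.Text as T

-- Same left fold as with Data.Map, but hashing instead of comparisons.
histogram :: [T.Text] -> HM.HashMap T.Text Int
histogram = foldl' (\m w -> HM.insertWith (+) w 1 m) HM.empty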
We need more information:
How long does it take both programs to process the words from the input, with no data structure for maintaining counts?
How many distinct words are there, so we can judge whether the extra log N cost for balanced trees is a consideration?
What does GHC's profiler say? In particular, how much time is spent in allocation? It's possible that the Haskell version is spending most of its time allocating tree nodes that quickly become obsolete.
UPDATE: I missed that lowercase "text" might mean Data.Text. You may be comparing apples and oranges. Python's Latin1 encoding uses one byte per char. Although it tries to be efficient, Data.Text must allow for the possibility of more than 256 characters. What happens if you switch to String, or better, Data.ByteString?
Depending on what these indicators say, here are a couple of things to try:
If analyzing the input is a bottleneck, try driving all your I/O and analysis from Data.ByteString instead of Text.
If the data structure is a bottleneck, Bentley and Sedgewick's ternary search trees are purely functional but perform competitively with hash tables. There is a TernaryTrees package on Hackage.
