Why isn't my Haskell program using `par` spawning any sparks?

I'm trying to compile the following program using threads:
import Control.Parallel (par, pseq)
import qualified Data.Vector.Unboxed as UV

passes = UV.fromList [1..1000000] :: UV.Vector Int
vector = UV.fromList [1..100] :: UV.Vector Double

vectors = x `par` y `par` z `par` w `pseq` (x,y,z,w) where
    x = UV.foldr' (const (UV.map (+1))) vector passes
    y = UV.foldr' (const (UV.map (+2))) vector passes
    z = UV.foldr' (const (UV.map (+3))) vector passes
    w = UV.foldr' (const (UV.map (+4))) vector passes

main = print vectors
But it does not look like it is executing in parallel, since its execution time with either -N1 or -N4 is almost the same.
vh:haskell apple1$ ghc -fforce-recomp -threaded -O2 bench.hs -o bench; time ./bench +RTS -s -N4
[1 of 1] Compiling Main ( bench.hs, bench.o )
Linking bench ...
(fromList [1000001.0,1000002.0,1000003.0,1000004.0,1000005.0,1000006.0,1000007.0,1000008.0,1000009.0,1000010.0,1000011.0,1000012.0,1000013.0,1000014.0,1000015.0,1000016.0,1000017.0,1000018.0,1000019.0,1000020.0,1000021.0,1000022.0,1000023.0,1000024.0,1000025.0,1000026.0,1000027.0,1000028.0,1000029.0,1000030.0,1000031.0,1000032.0,1000033.0,1000034.0,1000035.0,1000036.0,1000037.0,1000038.0,1000039.0,1000040.0,1000041.0,1000042.0,1000043.0,1000044.0,1000045.0,1000046.0,1000047.0,1000048.0,1000049.0,1000050.0,1000051.0,1000052.0,1000053.0,1000054.0,1000055.0,1000056.0,1000057.0,1000058.0,1000059.0,1000060.0,1000061.0,1000062.0,1000063.0,1000064.0,1000065.0,1000066.0,1000067.0,1000068.0,1000069.0,1000070.0,1000071.0,1000072.0,1000073.0,1000074.0,1000075.0,1000076.0,1000077.0,1000078.0,1000079.0,1000080.0,1000081.0,1000082.0,1000083.0,1000084.0,1000085.0,1000086.0,1000087.0,1000088.0,1000089.0,1000090.0,1000091.0,1000092.0,1000093.0,1000094.0,1000095.0,1000096.0,1000097.0,1000098.0,1000099.0,1000100.0],fromList [2000001.0,2000002.0,2000003.0,2000004.0,2000005.0,2000006.0,2000007.0,2000008.0,2000009.0,2000010.0,2000011.0,2000012.0,2000013.0,2000014.0,2000015.0,2000016.0,2000017.0,2000018.0,2000019.0,2000020.0,2000021.0,2000022.0,2000023.0,2000024.0,2000025.0,2000026.0,2000027.0,2000028.0,2000029.0,2000030.0,2000031.0,2000032.0,2000033.0,2000034.0,2000035.0,2000036.0,2000037.0,2000038.0,2000039.0,2000040.0,2000041.0,2000042.0,2000043.0,2000044.0,2000045.0,2000046.0,2000047.0,2000048.0,2000049.0,2000050.0,2000051.0,2000052.0,2000053.0,2000054.0,2000055.0,2000056.0,2000057.0,2000058.0,2000059.0,2000060.0,2000061.0,2000062.0,2000063.0,2000064.0,2000065.0,2000066.0,2000067.0,2000068.0,2000069.0,2000070.0,2000071.0,2000072.0,2000073.0,2000074.0,2000075.0,2000076.0,2000077.0,2000078.0,2000079.0,2000080.0,2000081.0,2000082.0,2000083.0,2000084.0,2000085.0,2000086.0,2000087.0,2000088.0,2000089.0,2000090.0,2000091.0,2000092.0,2000093.0,2000094.0,2000095.0,2000096.0,2000097.0,2000098.0,2000099.0,2000100.0],fromList [3000001.0,3000002.0,3000003.0,3000004.0,3000005.0,3000006.0,3000007.0,3000008.0,3000009.0,3000010.0,3000011.0,3000012.0,3000013.0,3000014.0,3000015.0,3000016.0,3000017.0,3000018.0,3000019.0,3000020.0,3000021.0,3000022.0,3000023.0,3000024.0,3000025.0,3000026.0,3000027.0,3000028.0,3000029.0,3000030.0,3000031.0,3000032.0,3000033.0,3000034.0,3000035.0,3000036.0,3000037.0,3000038.0,3000039.0,3000040.0,3000041.0,3000042.0,3000043.0,3000044.0,3000045.0,3000046.0,3000047.0,3000048.0,3000049.0,3000050.0,3000051.0,3000052.0,3000053.0,3000054.0,3000055.0,3000056.0,3000057.0,3000058.0,3000059.0,3000060.0,3000061.0,3000062.0,3000063.0,3000064.0,3000065.0,3000066.0,3000067.0,3000068.0,3000069.0,3000070.0,3000071.0,3000072.0,3000073.0,3000074.0,3000075.0,3000076.0,3000077.0,3000078.0,3000079.0,3000080.0,3000081.0,3000082.0,3000083.0,3000084.0,3000085.0,3000086.0,3000087.0,3000088.0,3000089.0,3000090.0,3000091.0,3000092.0,3000093.0,3000094.0,3000095.0,3000096.0,3000097.0,3000098.0,3000099.0,3000100.0],fromList 
[4000001.0,4000002.0,4000003.0,4000004.0,4000005.0,4000006.0,4000007.0,4000008.0,4000009.0,4000010.0,4000011.0,4000012.0,4000013.0,4000014.0,4000015.0,4000016.0,4000017.0,4000018.0,4000019.0,4000020.0,4000021.0,4000022.0,4000023.0,4000024.0,4000025.0,4000026.0,4000027.0,4000028.0,4000029.0,4000030.0,4000031.0,4000032.0,4000033.0,4000034.0,4000035.0,4000036.0,4000037.0,4000038.0,4000039.0,4000040.0,4000041.0,4000042.0,4000043.0,4000044.0,4000045.0,4000046.0,4000047.0,4000048.0,4000049.0,4000050.0,4000051.0,4000052.0,4000053.0,4000054.0,4000055.0,4000056.0,4000057.0,4000058.0,4000059.0,4000060.0,4000061.0,4000062.0,4000063.0,4000064.0,4000065.0,4000066.0,4000067.0,4000068.0,4000069.0,4000070.0,4000071.0,4000072.0,4000073.0,4000074.0,4000075.0,4000076.0,4000077.0,4000078.0,4000079.0,4000080.0,4000081.0,4000082.0,4000083.0,4000084.0,4000085.0,4000086.0,4000087.0,4000088.0,4000089.0,4000090.0,4000091.0,4000092.0,4000093.0,4000094.0,4000095.0,4000096.0,4000097.0,4000098.0,4000099.0,4000100.0])
3,842,955,664 bytes allocated in the heap
16,390,368 bytes copied during GC
8,469,360 bytes maximum residency (6 sample(s))
2,122,880 bytes maximum slop
24 MB total memory in use (7 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 7411 colls, 7411 par 0.20s 0.06s 0.0000s 0.0004s
Gen 1 6 colls, 5 par 0.00s 0.00s 0.0002s 0.0008s
Parallel GC work balance: 1.00% (serial 0%, perfect 100%)
TASKS: 10 (1 bound, 9 peak workers (9 total), using -N4)
SPARKS: 3 (0 converted, 0 overflowed, 0 dud, 3 GC'd, 0 fizzled)
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.77s ( 0.69s elapsed)
GC time 0.20s ( 0.06s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.97s ( 0.76s elapsed)
Alloc rate 5,018,806,918 bytes per MUT second
Productivity 78.9% of total user, 101.5% of total elapsed
gc_alloc_block_sync: 6988
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
real 0m0.759s
user 0m0.972s
sys 0m0.158s
vh:haskell apple1$ time ./bench
(output identical to the -N4 run above)
real 0m0.717s
user 0m0.694s
sys 0m0.021s
Why is it not running in parallel, and how do I fix it?

I think it takes a while for sparks to get processed after they are created, so if you don't have a lot of work to do they are likely to be GC'ed or fizzle out.
Consider this program:
import Control.Parallel.Strategies
import System.Environment

fib :: Int -> Int
fib n
    | n <= 2 = n
    | otherwise = fib (n-2) + fib (n-1)

test4 n = runEval $ do
    a <- rpar (fib n)
    b <- rpar (fib (n+1))
    c <- rpar (fib (n+2))
    d <- rpar (fib (n+3))
    return (a,b,c,d)

main = do
    n <- fmap (read.head) getArgs
    print $ test4 n
and here is a summary of what I typically see for various values of n:
n           sparks
----------  ---------------------------
up to 25    either 4 GC'd or 4 fizzled
26          1 converted, 3 fizzled
28          2 converted, 2 fizzled
30          3 converted, 1 fizzled
In each case 4 sparks get created, but for smaller values of n the spark manager doesn't have any time to evaluate any of them and they all get evaluated by the main thread.

There certainly is a trick to get sparks to run, but here is another approach, using parMap from Control.Monad.Par:
import Control.Monad.Par
import qualified Data.Vector.Unboxed as UV

passes = UV.fromList [1..1000000] :: UV.Vector Int
vector = UV.fromList [1..100] :: UV.Vector Double

test = runPar $ parMap go [1..4]
    where go k = UV.foldr' (const (UV.map (+k))) vector passes

main = print test
This does not create any sparks, but the code runs in parallel. The profiling stats show a Total time of 1.63s (0.93s elapsed).
Threadscope is very handy for observing HEC activity. Compile with:
ghc -O2 -threaded -eventlog -rtsopts ...
and run with:
./prog +RTS -N... -l
to generate the event log file to use with Threadscope.


Finding the size of a list that's too big for memory?

Brand new Haskell programmer here. Just finished "Learn You a Haskell"... I'm interested in how large a set with some particular properties is. I have working code for some small parameter values, but I'd like to know how to deal with larger structures. I know Haskell can do "infinite data structures", but I'm just not seeing how to get there from where I'm at, and Learn You a Haskell / Google isn't getting me over this.
Here's the working code for my eSet given "small" arguments r and t:
import Control.Monad
import System.Environment
import System.Exit

myPred :: [Int] -> Bool
myPred a = myPred' [] a
  where
    myPred' [] []  = False
    myPred' [] [0] = True
    myPred' _  []  = True
    myPred' acc (0:aTail) = myPred' acc aTail
    myPred' acc (a:aTail)
        | a `elem` acc = False
        | otherwise    = myPred' (a:acc) aTail

superSet :: Int -> Int -> [[Int]]
superSet r t = replicateM r [0..t]

eSet :: Int -> Int -> [[Int]]
eSet r t = filter myPred $ superSet r t

main :: IO ()
main = do
    args <- getArgs
    case args of
        [rArg, tArg] ->
            print $ length $ eSet (read rArg) (read tArg)
        [rArg, tArg, "set"] ->
            print $ eSet (read rArg) (read tArg)
        _ ->
            die "Usage: eSet r t [set]  -- pass set to print the set itself; otherwise just print the size"
When compiled/run I get
$ ghc eSet.hs -rtsopts
[1 of 1] Compiling Main ( eSet.hs, eSet.o )
Linking eSet ...
$ # Here's a tiny eSet to illustrate. It is the set of lists of r integers from zero to t with no repeated nonzero list entries
$ ./eSet 4 2 set
[[0,0,0,0],[0,0,0,1],[0,0,0,2],[0,0,1,0],[0,0,1,2],[0,0,2,0],[0,0,2,1],[0,1,0,0],[0,1,0,2],[0,1,2,0],[0,2,0,0],[0,2,0,1],[0,2,1,0],[1,0,0,0],[1,0,0,2],[1,0,2,0],[1,2,0,0],[2,0,0,0],[2,0,0,1],[2,0,1,0],[2,1,0,0]]
$ ./eSet 8 4 +RTS -sstderr
3393
174,406,136 bytes allocated in the heap
29,061,152 bytes copied during GC
4,382,568 bytes maximum residency (7 sample(s))
148,664 bytes maximum slop
14 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 328 colls, 0 par 0.047s 0.047s 0.0001s 0.0009s
Gen 1 7 colls, 0 par 0.055s 0.055s 0.0079s 0.0147s
INIT time 0.000s ( 0.000s elapsed)
MUT time 0.298s ( 0.301s elapsed)
GC time 0.102s ( 0.102s elapsed)
EXIT time 0.001s ( 0.001s elapsed)
Total time 0.406s ( 0.405s elapsed)
%GC time 25.1% (25.2% elapsed)
Alloc rate 585,308,888 bytes per MUT second
Productivity 74.8% of total user, 75.0% of total elapsed
$ ./eSet 10 5 +RTS -sstderr
63591
27,478,010,744 bytes allocated in the heap
4,638,903,384 bytes copied during GC
532,163,096 bytes maximum residency (15 sample(s))
16,500,072 bytes maximum slop
1556 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 52656 colls, 0 par 6.865s 6.864s 0.0001s 0.0055s
Gen 1 15 colls, 0 par 8.853s 8.997s 0.5998s 1.8617s
INIT time 0.000s ( 0.000s elapsed)
MUT time 52.652s ( 52.796s elapsed)
GC time 15.717s ( 15.861s elapsed)
EXIT time 0.193s ( 0.211s elapsed)
Total time 68.564s ( 68.868s elapsed)
%GC time 22.9% (23.0% elapsed)
Alloc rate 521,883,277 bytes per MUT second
Productivity 77.1% of total user, 76.7% of total elapsed
I see my memory usage is very high and there's a lot of garbage collecting. When running eSet 12 6 I get a Segmentation fault.
I feel like filter myPred $ superSet r t is keeping me from lazily making the subset one part at a time so I can deal with much larger (but finite) sets, but I don't know how to change to another approach that would do that. I think that's the root of my question.
Also, as this is my first Haskell program, points on style and how to achieve the Haskell analog of "pythonic" are much appreciated!
I suspect the culprit here is replicateM, which has this implementation:
replicateM cnt0 f =
    loop cnt0
  where
    loop cnt
        | cnt <= 0  = pure []
        | otherwise = liftA2 (:) f (loop (cnt - 1))
The problem line is liftA2 (:) f (loop (cnt - 1)); probably loop (cnt - 1) is getting shared among all the calls to (:) partially applied to elements of f, and so loop (cnt - 1) must be kept in memory. Unfortunately loop (cnt - 1) is quite a long list...
It can be a bit fiddly to convince GHC not to share something. The following redefinition of superSet gives me a nice flat memory usage; it will probably be a bit slower on examples that do fit in memory, of course. The key idea is to make it look to the untrained eye (i.e. GHC) like the recursive monadic action depends on the choices made earlier, even though it doesn't.
superSet :: Int -> Int -> [[Int]]
superSet r t = go r 0 where
    go 0 ignored = if ignored == 0 then [[]] else [[]]
    go r ignored = do
        x <- [0..t]
        xs <- go (r-1) (ignored+x)
        return (x:xs)
If you don't mind avoiding optimizations, the more natural definition also works:
superSet 0 t = [[]]
superSet r t = do
    x <- [0..t]
    xs <- superSet (r-1) t
    return (x:xs)
...but with -O2 GHC is too clever and notices that it can share the recursive call.
A completely alternate approach is to just do a little bit of combinatorics analysis. Here's the process that eSet r t does, as near as I can make out:
Choose at most r elements without replacement from a set of size t.
Pad the sequence to a length of r by interleaving a sentinel value.
So let's just count the ways of doing each of these steps, rather than actually performing them. We'll introduce a new parameter, s, which is the length of the sequence produced by step (1) (and which we therefore know has s <= r and s <= t). How many sequences of size s are there when drawing elements without replacement from a set of size t? Well, there are t choices for the first element, t-1 choices for the second element, t-2 choices for the third element, ...
-- sequencesWithoutReplacement is a very long name!
seqWORepSize :: Integer -> Integer -> Integer
seqWORepSize s t = product [t-s+1 .. t]
Then we want to pad the sequence out to a length of r. We're going to choose s positions in the r-long sequence to be drawn from our sequence, and the remainder will be sentinels. How many ways are there to do that? This one is a well-known combinatorics operator called choose.
choose :: Integer -> Integer -> Integer
choose r s = product [r-s+1 .. r] `div` product [2 .. s]
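As a quick GHCi sanity check (my own, not part of the original answer): drawing 2 of 4 elements in order gives 4*3 = 12 sequences, and 5 `choose` 2 is the familiar 10:
> (seqWORepSize 2 4, 5 `choose` 2)
(12,10)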
The number of ways to produce a padded sequence of a given length is just the product of these two numbers, since the choices of "what values to insert" and "where to insert values" can be made completely independently.
paddedSeqSize :: Integer -> Integer -> Integer -> Integer
paddedSeqSize r s t = seqWORepSize s t * (r `choose` s)
And now we're pretty much done. Just iterate over all possible sequence lengths and add up the paddedSeqSize.
eSetSize :: Integer -> Integer -> Integer
eSetSize r t = sum $ map (\s -> paddedSeqSize r s t) [0..r]
We can try it out in ghci:
> :set +s
> map length $ [eSet 1 1, eSet 4 4, eSet 6 4, eSet 4 6]
[2,209,1045,1045]
(0.13 secs, 26,924,264 bytes)
> [eSetSize 1 1, eSetSize 4 4, eSetSize 6 4, eSetSize 4 6]
[2,209,1045,1045]
(0.01 secs, 120,272 bytes)
This way is significantly faster and significantly more memory-friendly. Indeed, we can make queries and get answers about eSets that we would never be able to count the size of one-by-one, e.g.
> length . show $ eSetSize 1000 1000
2594
(0.26 secs, 909,746,448 bytes)
Good luck counting to 10^2594 one at a time. =P
Edit
This idea can also be adapted to produce the padded sequences themselves rather than just counting how many there are. First, a handy utility that I find myself defining over and over for picking out individual elements of a list and reporting on the leftovers:
zippers :: [a] -> [([a], a, [a])]
zippers = go [] where
    go ls []     = []
    go ls (h:rs) = (ls, h, rs) : go (h:ls) rs
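For example (a quick GHCi check of my own, not from the original answer):
> zippers "abc"
[("",'a',"bc"),("a",'b',"c"),("ba",'c',"")]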
Now, the sequences without replacement can be done by repeatedly choosing a single element from the leftovers.
seqWORep :: Int -> [a] -> [[a]]
seqWORep 0 _  = [[]]
seqWORep n xs = do
    (ls, y, rs) <- zippers xs
    ys <- seqWORep (n-1) (ls++rs)
    return (y:ys)
Once we have a sequence, we can pad it to a desired size by producing all the interleavings of the sentinel value as follows:
interleavings :: Int -> a -> [a] -> [[a]]
interleavings 0 _ xs = [xs]
interleavings n z [] = [replicate n z]
interleavings n z xs@(x:xt) = map (z:) (interleavings (n-1) z xs)
                           ++ map (x:) (interleavings n z xt)
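For example, padding the two-element sequence [1,2] with two copies of the sentinel 0 (again my own GHCi check, not from the original answer):
> interleavings 2 0 [1,2]
[[0,0,1,2],[0,1,0,2],[0,1,2,0],[1,0,0,2],[1,0,2,0],[1,2,0,0]]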
And finally, the top-level function just delegates to seqWORep and interleavings.
eSet :: Int -> Int -> [[Int]]
eSet r t = do
    s <- [0..r]
    xs <- seqWORep s [1..t]
    interleavings (r-s) 0 xs
In my tests this modified eSet has nice flat memory usage both with and without optimizations; does not generate any spurious elements that need to be later filtered out, and so is faster than your original proposal; and to me looks like quite a natural definition compared to the answer that relies on tricking GHC. A nice collection of properties!
After re-reading parts of LYaH and thinking about @daniel-wagner's answer, composing monadically sounded like it would be worthwhile to try again. The new code's total memory usage is flat, and it works both with and without the -O2 optimization.
Source:
import Control.Monad
import System.Environment
import System.Exit

allowed :: [Int] -> Bool
allowed a = allowed' [] a
  where
    allowed' [] []  = False
    allowed' [] [0] = True
    allowed' _  []  = True
    allowed' acc (0:aTail) = allowed' acc aTail
    allowed' acc (a:aTail)
        | a `elem` acc = False
        | otherwise    = allowed' (a:acc) aTail
branch :: Int -> [Int] -> [[Int]]
branch t x = filter allowed [n:x | n <- [0..t]]
eSet :: Int -> Int -> [[Int]]
eSet r t = return [] >>= foldr (<=<) return (replicate r (branch t))
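-- foldr (<=<) return (replicate r (branch t)) composes r copies of branch t
-- in the list monad (branch t <=< ... <=< branch t <=< return), so starting
-- from the empty list each stage prepends one element and filters with
-- `allowed`; invalid partial lists are dropped as soon as they appear rather
-- than building the whole superset first.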
main :: IO ()
main = do
    args <- getArgs
    case args of
        [rArg, tArg] ->
            print $ length $ eSet (read rArg) (read tArg)
        [rArg, tArg, "set"] ->
            print $ eSet (read rArg) (read tArg)
        _ -> die "Usage: eSet r t set <set optional>"
The version with monadic function composition tests much faster and without the memory issues...
$ ./eSetMonad 10 5 +RTS -sstderr
63591
289,726,000 bytes allocated in the heap
997,968 bytes copied during GC
63,360 bytes maximum residency (2 sample(s))
24,704 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 553 colls, 0 par 0.008s 0.008s 0.0000s 0.0002s
Gen 1 2 colls, 0 par 0.000s 0.000s 0.0002s 0.0003s
INIT time 0.000s ( 0.000s elapsed)
MUT time 0.426s ( 0.429s elapsed)
GC time 0.009s ( 0.009s elapsed)
EXIT time 0.000s ( 0.000s elapsed)
Total time 0.439s ( 0.438s elapsed)
%GC time 2.0% (2.0% elapsed)
Alloc rate 680,079,893 bytes per MUT second
Productivity 98.0% of total user, 98.3% of total elapsed

Word foldl' isn't optimized as well as Int foldl'

import Data.List

test :: Int -> Int
test n = foldl' (+) 0 [1..n]

main :: IO ()
main = do
    print $ test $ 10^8
GHC optimizes the above code to the point that the garbage collector doesn't even have to do anything:
$ ghc -rtsopts -O2 testInt && ./testInt +RTS -s
[1 of 1] Compiling Main ( testInt.hs, testInt.o )
Linking testInt ...
5000000050000000
51,752 bytes allocated in the heap
3,480 bytes copied during GC
44,384 bytes maximum residency (1 sample(s))
17,056 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 0 colls, 0 par 0.000s 0.000s 0.0000s 0.0000s
Gen 1 1 colls, 0 par 0.000s 0.000s 0.0001s 0.0001s
INIT time 0.000s ( 0.000s elapsed)
MUT time 0.101s ( 0.101s elapsed)
GC time 0.000s ( 0.000s elapsed)
EXIT time 0.000s ( 0.000s elapsed)
Total time 0.103s ( 0.102s elapsed)
%GC time 0.1% (0.1% elapsed)
Alloc rate 511,162 bytes per MUT second
Productivity 99.8% of total user, 100.9% of total elapsed
However, if I change the type of test to test :: Word -> Word, then a lot of garbage is produced and the code runs 40x slower.
ghc -rtsopts -O2 testWord && ./testWord +RTS -s
[1 of 1] Compiling Main ( testWord.hs, testWord.o )
Linking testWord ...
5000000050000000
11,200,051,784 bytes allocated in the heap
1,055,520 bytes copied during GC
44,384 bytes maximum residency (2 sample(s))
21,152 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 21700 colls, 0 par 0.077s 0.073s 0.0000s 0.0000s
Gen 1 2 colls, 0 par 0.000s 0.000s 0.0001s 0.0001s
INIT time 0.000s ( 0.000s elapsed)
MUT time 4.551s ( 4.556s elapsed)
GC time 0.077s ( 0.073s elapsed)
EXIT time 0.000s ( 0.000s elapsed)
Total time 4.630s ( 4.630s elapsed)
%GC time 1.7% (1.6% elapsed)
Alloc rate 2,460,957,186 bytes per MUT second
Productivity 98.3% of total user, 98.3% of total elapsed
Why does this happen? I expected the performance to be nearly identical.
(I'm using GHC version 8.0.1 on x86_64 GNU/Linux)
edit: I submitted a bug: https://ghc.haskell.org/trac/ghc/ticket/12354#ticket
This is probably mostly, though not exclusively, due to rewrite rules that exist for Int and not Word. I say that because if we use -fno-enable-rewrite-rules on the Int case we get a time that is much closer to, but not quite as bad as, the Word case.
% ghc -O2 so.hs -fforce-recomp -fno-enable-rewrite-rules && time ./so
[1 of 1] Compiling Main ( so.hs, so.o )
Linking so ...
5000000050000000
./so 1.45s user 0.03s system 99% cpu 1.489 total
If we dump the rewrite rules with -ddump-rule-rewrites and diff these rules then we see a rule that fires in the Int case and not the Word case:
Rule: fold/build
Before: GHC.Base.foldr
...
That particular rule is in Base 4.9 GHC.Base line 823 (N.B. I'm actually using GHC 7.10 myself) and does not mention Int explicitly. I'm curious why it isn't firing for Word, but don't have the time right now to investigate further.
As pointed out by dfeuer in a comment here, the Enum instance for Int is better than the one for Word:
Int:
instance Enum Int where
    {-# INLINE enumFromTo #-}
    enumFromTo (I# x) (I# y) = eftInt x y

{-# RULES
"eftInt"     [~1] forall x y. eftInt x y = build (\ c n -> eftIntFB c n x y)
"eftIntList" [1]  eftIntFB (:) [] = eftInt
 #-}

{- Note [How the Enum rules work]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Phase 2: eftInt ---> build . eftIntFB
* Phase 1: inline build; eftIntFB (:) --> eftInt
* Phase 0: optionally inline eftInt
-}

{-# NOINLINE [1] eftInt #-}
eftInt :: Int# -> Int# -> [Int]
-- [x1..x2]
eftInt x0 y | isTrue# (x0 ># y) = []
            | otherwise         = go x0
  where
    go x = I# x : if isTrue# (x ==# y)
                  then []
                  else go (x +# 1#)

{-# INLINE [0] eftIntFB #-}
eftIntFB :: (Int -> r -> r) -> r -> Int# -> Int# -> r
eftIntFB c n x0 y | isTrue# (x0 ># y) = n
                  | otherwise         = go x0
  where
    go x = I# x `c` if isTrue# (x ==# y)
                    then n
                    else go (x +# 1#)
        -- Watch out for y=maxBound; hence ==, not >
        -- Be very careful not to have more than one "c"
        -- so that when eftInfFB is inlined we can inline
        -- whatever is bound to "c"
Now Word actually uses the implementation for Integer
enumFromTo n1 n2 = map integerToWordX [wordToIntegerX n1 .. wordToIntegerX n2]
which uses
instance Enum Integer where
    enumFromTo x lim = enumDeltaToInteger x 1 lim
Now enumDeltaToInteger has rewrite rules set up, but it turns out that Word’s enumFromTo is never inlined, so this setup has no chance of fusing here.
Copying this function into my test code causes GHC to inline it, the fold/build rule to fire, and cuts down allocation severely, but the conversion from and to Integer (which does allocate) remains.
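For reference, here is a sketch of that experiment (my own reconstruction, using the public toInteger/fromInteger conversions in place of the internal wordToIntegerX/integerToWordX, so it only approximates the base definition quoted above):
import Data.List (foldl')

-- Local copy of Word's enumFromTo, marked INLINE so GHC can inline it at the
-- call site instead of treating it as an opaque class method.
wordEnumFromTo :: Word -> Word -> [Word]
wordEnumFromTo n1 n2 = map fromInteger [toInteger n1 .. toInteger n2]
{-# INLINE wordEnumFromTo #-}

test :: Word -> Word
test n = foldl' (+) 0 (wordEnumFromTo 1 n)

main :: IO ()
main = print $ test $ 10^8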

Space leak when grouping key/value pairs in Haskell

I have a problem where my code is creating too many thunks (over 270MB) and consequently spending way too much time (over 70%) in GC when grouping values by key. I was wondering what the best way to group values by key is.
The problem is that I have keys and values represented by vectors and I want to group the values by keys preserving the order. For example:
Input:
keys = 1 2 4 3 1 3 4 2 1
vals = 1 2 3 4 5 6 7 8 9
Output:
1 = 1,5,9
2 = 2,8
3 = 4,6
4 = 3,7
Compile options:
ghc --make -O3 -fllvm histogram.hs
In imperative programming, I would just use a multimap, so I decided to use a hash table where the associated value is [Int] to store the grouped values. I am hoping there is a much better FP solution.
{-# LANGUAGE BangPatterns #-}
import qualified Data.HashMap.Strict as M
import qualified Data.Vector.Unboxed as V

n :: Int
n = 5000000

kv :: V.Vector (Int,Int)
kv = V.zip k v
  where
    k = V.generate n (\i -> i `mod` 1000)
    v = V.generate n (\i -> i)

ts :: V.Vector (Int,Int) -> M.HashMap Int Int
ts vec =
    V.foldl' (\ht (k, v) -> M.insertWith (+) k v ht) M.empty vec

ts2 :: V.Vector (Int,Int) -> M.HashMap Int [Int]
ts2 vec =
    V.foldl' (\ht (!k, !v) -> M.insertWith (++) k [v] ht) M.empty vec

main :: IO ()
main = ts2 kv `seq` putStrLn "done"
Here's what spits out at runtime:
3,117,102,992 bytes allocated in the heap
1,847,205,880 bytes copied during GC
324,159,752 bytes maximum residency (12 sample(s))
6,502,224 bytes maximum slop
658 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 5991 colls, 0 par 0.58s 0.58s 0.0001s 0.0003s
Gen 1 12 colls, 0 par 0.69s 0.69s 0.0577s 0.3070s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.45s ( 0.45s elapsed)
GC time 1.27s ( 1.27s elapsed)
EXIT time 0.03s ( 0.03s elapsed)
Total time 1.75s ( 1.75s elapsed)
%GC time 72.7% (72.8% elapsed)
Alloc rate 6,933,912,935 bytes per MUT second
Productivity 27.3% of total user, 27.3% of total elapsed
You can see it spends a lot of time in GC, so I decided to use bangs to make the list concatenation strict. I guess the (++) is quite expensive too, but I don't know a workaround for this.
Those strictness annotations are useless. They're forcing only the first constructor of the lists.
Even worse, it appears you're attempting to left fold (++), which is never a good idea. It results in lots of useless copying of intermediate lists, even when it's made fully strict.
You should fold to a [Int] -> [Int] value instead. That will get rid of the multiple useless allocations. I'm on mobile, so I can't really provide full example code. The main idea is that you change the loop to M.insertWith (.) k (v:) and then map ($ []) over the values in the HashMap after the fold.
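Spelling that idea out (my own sketch of what is described above, reusing the definitions from the question; the name ts2' is not from the answer):
import qualified Data.HashMap.Strict as M
import qualified Data.Vector.Unboxed as V

n :: Int
n = 5000000

kv :: V.Vector (Int,Int)
kv = V.zip k v
  where
    k = V.generate n (\i -> i `mod` 1000)
    v = V.generate n (\i -> i)

-- Fold to difference lists ([Int] -> [Int]): each insert composes a single
-- (v:) onto the accumulated builder instead of copying a list with (++).
ts2' :: V.Vector (Int,Int) -> M.HashMap Int [Int]
ts2' vec = M.map ($ []) builders
  where
    builders = V.foldl' (\ht (k, v) -> M.insertWith (.) k (v:) ht) M.empty vec

main :: IO ()
main = print $ M.foldl' (+) 0 $ M.map length $ ts2' kv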
The bulk of your problem is due to (++) leading to "lots of useless copying of intermediate lists", as Carl puts it in his answer. Having played with a few different approaches at replacing (++), I got the best results thus far by switching to Data.IntMap.Strict from containers (just to take advantage of the less stern API - I don't know which implementation is more efficient per se) and using its alter function to prepend the vector elements without creating singleton lists:
import qualified Data.IntMap.Strict as M
import qualified Data.Vector.Unboxed as V

n :: Int
n = 5000000

kv :: V.Vector (Int,Int)
kv = V.zip k v
  where
    k = V.generate n (\i -> i `mod` 1000)
    v = V.generate n (\i -> i)

ts2 :: V.Vector (Int,Int) -> M.IntMap [Int]
ts2 vec =
    V.foldl' (\ht (k, v) -> M.alter (prep v) k ht) M.empty vec
  where
    prep x = Just . maybe [x] (x:)

main :: IO ()
main = print $ M.foldl' (+) 0 $ M.map length $ ts2 kv
The second best solution was using
\ht (k, v) -> M.insertWith (\(x:_) -> (x :)) k [v] ht
as the fold operator. That works with both Data.IntMap.Strict and Data.HashMap.Strict, with similar results performance-wise.
N.B.: Note that in all cases, your original implementation included, the vector elements are being prepended, rather than appended, to the lists. Your problems would be much more serious if you were appending the elements, as repeatedly appending to an empty list with (++) is quadratic in the number of elements.
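To illustrate that N.B. (a toy example of my own, not from the answer): appending one element at a time retraverses the whole accumulated list on every step, while prepending does constant work per element:
-- Quadratic: every (++) copies the entire accumulator built so far.
appendAll :: [Int] -> [Int]
appendAll = foldl (\acc v -> acc ++ [v]) []

-- Linear: (:) is O(1); reverse once at the end if the order matters.
prependAll :: [Int] -> [Int]
prependAll = reverse . foldl (\acc v -> v : acc) []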
I tried to run your code on my host and I am not able to reproduce your profile:
runhaskell test8.hs +RTS -sstderr
done
120,112 bytes allocated in the heap
3,520 bytes copied during GC
68,968 bytes maximum residency (1 sample(s))
12,952 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 0 colls, 0 par 0.00s 0.00s 0.0000s 0.0000s
Gen 1 1 colls, 0 par 0.00s 0.09s 0.0909s 0.0909s
INIT time 0.00s ( 0.01s elapsed)
MUT time 0.00s ( 29.21s elapsed)
GC time 0.00s ( 0.09s elapsed)
EXIT time 0.00s ( 0.09s elapsed)
Total time 0.01s ( 29.40s elapsed)
%GC time 5.7% (0.3% elapsed)
Alloc rate 381,307,936 bytes per MUT second
Productivity 91.1% of total user, 0.0% of total elapsed
Can you please outline in more detail how you are testing the code? If you are using GHCi, then
$ ghci -fobject-code
we probably need to use -fobject-code to eliminate any space leaks from GHCi. If you have already tried that option, I will edit my answer. At this point, I would like to reproduce the issue you are seeing.
Update:
@duplode: Thank you for the pointers. I am going to delete the previous output, if no one objects, as it is misleading.
I have been able to reduce the GC overhead a bit using one of the following options. I am getting some benefit, but the overhead is still in the 49-50% range:
ts3 :: V.Vector (Int, Int) -> M.HashMap Int [Int]
ts3 vec =
    V.foldl (\ht (!k, !v) ->
        let element = M.lookup k ht
        in case element of
            Nothing    -> M.insert k [v] ht
            Just aList -> M.insert k (v:aList) ht) M.empty vec

ts4 :: V.Vector (Int,Int) -> M.HashMap Int [Int]
ts4 vec =
    let initMap = V.foldl (\ht (!k,_) -> M.insert k [] ht) M.empty vec
    in V.foldl (\ht (!k, !v) -> M.adjust (\x -> v:x) k ht) initMap vec
The adjust seemed a bit better, but the results are similar to a straight lookup. With ts4 using adjust:
calling ts4 done.
3,838,059,320 bytes allocated in the heap
2,041,603,344 bytes copied during GC
377,412,728 bytes maximum residency (6 sample(s))
7,725,944 bytes maximum slop
737 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 7260 colls, 0 par 1.32s 1.45s 0.0002s 0.0013s
Gen 1 6 colls, 0 par 0.88s 1.40s 0.2328s 0.9236s
INIT time 0.00s ( 0.00s elapsed)
MUT time 2.18s ( 2.21s elapsed)
GC time 2.19s ( 2.85s elapsed)
RP time 0.00s ( 0.00s elapsed)
PROF time 0.00s ( 0.00s elapsed)
EXIT time 0.01s ( 0.07s elapsed)
Total time 4.38s ( 5.13s elapsed)
%GC time 50.0% (55.5% elapsed)
Alloc rate 1,757,267,879 bytes per MUT second
Productivity 50.0% of total user, 42.7% of total elapsed
Using the simple lookup/update (imperative style of updating a map)
calling ts3 done.
3,677,137,816 bytes allocated in the heap
2,040,053,712 bytes copied during GC
395,867,512 bytes maximum residency (6 sample(s))
7,326,104 bytes maximum slop
769 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 6999 colls, 0 par 1.35s 1.51s 0.0002s 0.0037s
Gen 1 6 colls, 0 par 1.06s 2.16s 0.3601s 1.3175s
INIT time 0.00s ( 0.00s elapsed)
MUT time 1.89s ( 2.07s elapsed)
GC time 2.41s ( 3.67s elapsed)
RP time 0.00s ( 0.00s elapsed)
PROF time 0.00s ( 0.00s elapsed)
EXIT time 0.01s ( 0.08s elapsed)
Total time 4.31s ( 5.82s elapsed)
%GC time 55.9% (63.0% elapsed)
Alloc rate 1,942,816,558 bytes per MUT second
Productivity 44.1% of total user, 32.6% of total elapsed
I am interested in finding out how to reduce the lookup time shown in the profile output below:
COST CENTRE     MODULE  %time  %alloc

ts3.\           Main     54.1    91.4
ts3.\.element   Main     19.0     2.9
ts3             Main     11.0     2.9
kv.k            Main      6.5     1.4
kv.v            Main      5.2     1.4
kv.k.\          Main      4.0     0.0

                                                  individual      inherited
COST CENTRE         MODULE  no.  entries  %time  %alloc  %time  %alloc

MAIN                MAIN     72        0    0.0     0.0  100.0   100.0
 main               Main    158        0    0.0     0.0    0.0     0.0
 CAF:main           Main    143        0    0.0     0.0   84.2    97.1
  main              Main    144        1    0.0     0.0   84.2    97.1
   ts3              Main    145        1   11.0     2.9   84.2    97.1
    ts3.\           Main    156  5000000   54.1    91.4   73.2    94.3
     ts3.\.element  Main    157  5000000   19.0     2.9   19.0     2.9
 CAF:kv             Main    142        0    0.0     0.0    0.0     0.0
Code
-- ghc -O2 --make test8.hs -prof -auto-all -caf-all -fforce-recomp +RTS
-- ./test8 +RTS -p
{-# LANGUAGE BangPatterns #-}
import qualified Data.HashMap.Strict as M
import qualified Data.Vector.Unboxed as V

n :: Int
n = 5000000

kv :: V.Vector (Int,Int)
kv = V.zip k v
  where
    k = V.generate n (\i -> i `mod` 1000)
    v = V.generate n (\i -> i)

ts :: V.Vector (Int,Int) -> M.HashMap Int Int
ts vec =
    V.foldl' (\ht (k, v) -> M.insertWith (+) k v ht) M.empty vec

ts2 :: V.Vector (Int,Int) -> M.HashMap Int [Int]
ts2 vec =
    V.foldl (\ht (!k, !v) -> M.insertWith (++) k [v] ht) M.empty vec

ts3 :: V.Vector (Int, Int) -> M.HashMap Int [Int]
ts3 vec =
    V.foldl (\ht (!k, !v) ->
        let element = M.lookup k ht
        in case element of
            Nothing    -> M.insert k [v] ht
            Just aList -> M.insert k (v:aList) ht) M.empty vec

ts4 :: V.Vector (Int,Int) -> M.HashMap Int [Int]
ts4 vec =
    let initMap = V.foldl (\ht (!k,_) -> M.insert k [] ht) M.empty vec
    in V.foldl (\ht (!k, !v) -> M.adjust (\x -> v:x) k ht) initMap vec

main :: IO ()
main = ts3 kv `seq` putStrLn "calling ts3 done."

main1 = do
    if x == y
        then putStrLn "Algos Match"
        else putStrLn "Error"
  where
    x = ts2 kv
    y = ts4 kv

Parallel Haskell - GHC GC'ing sparks

I have a program I'm trying to parallelize (full paste with runnable code here).
I've profiled and found that the majority of time is spent in findNearest which is essentially a simple foldr over a large Data.Map.
findNearest :: RGB -> M.Map k RGB -> (k, Word32)
findNearest rgb m0 =
    M.foldrWithKey' minDistance (k0, distance rgb r0) m0
  where
    (k0, r0) = M.findMin m0
    minDistance k r x@(_, d1) =
        -- Euclidean distance in RGB-space
        let d0 = distance rgb r
        in if d0 < d1 then (k, d0) else x
parFindNearest is supposed to execute findNearest in parallel over subtrees of the larger Map.
parFindNearest :: NFData k => RGB -> M.Map k RGB -> (k, Word32)
parFindNearest rgb = minimumBy (comparing snd)
                   . parMap rdeepseq (findNearest rgb)
                   . M.splitRoot
Unfortunately GHC GC's most of my sparks before they are converted into useful parallelism.
Here's the result of compiling with ghc -O2 -threaded and running with +RTS -s -N2
839,892,616 bytes allocated in the heap
123,999,464 bytes copied during GC
5,320,184 bytes maximum residency (19 sample(s))
3,214,200 bytes maximum slop
16 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1550 colls, 1550 par 0.23s 0.11s 0.0001s 0.0004s
Gen 1 19 colls, 18 par 0.11s 0.06s 0.0030s 0.0052s
Parallel GC work balance: 16.48% (serial 0%, perfect 100%)
TASKS: 6 (1 bound, 5 peak workers (5 total), using -N2)
SPARKS: 215623 (1318 converted, 0 overflowed, 0 dud, 198111 GC'd, 16194 fizzled)
INIT time 0.00s ( 0.00s elapsed)
MUT time 3.72s ( 3.66s elapsed)
GC time 0.34s ( 0.17s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 4.07s ( 3.84s elapsed)
Alloc rate 225,726,318 bytes per MUT second
Productivity 91.6% of total user, 97.1% of total elapsed
gc_alloc_block_sync: 9862
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 2103
As you can see, the majority of sparks are GC'd or fizzle before being converted. I've tried experimenting with different strictness, having findNearest return a custom strict pair data type instead of a tuple, or using rdeepseq from Control.Parallel.Strategies, but my sparks are still GC'd.
I'd like to know
why are my sparks being GC'd before being converted?
how can I change my program to take advantage of parallelism?
I'm not an expert in parallel strategies, so I may be completely wrong. But:
If you effectively disable GC by setting a big enough allocation area (e.g. using the -A20M runtime option), you'll see that most of the sparks are fizzled, not GC'd. That means they are evaluated by the ordinary program flow before the corresponding spark finishes.
minimumBy forces the parMap results immediately and starts evaluating them. At the same time, the sparks are scheduled and executed, but it is too late: by the time a spark finishes, the value has already been evaluated by the main thread. Without -A20M, the sparks are GC'd because the value is evaluated and collected even before the spark is scheduled.
Here is a simplified test case:
import Control.Parallel.Strategies

f :: Integer -> Integer
f 0 = 1
f n = n * f (n - 1)

main :: IO ()
main = do
    let l = [n..n+10]
        n = 1
        res = parMap rdeepseq f l
    print res
In that case all the sparks are fizzled:
SPARKS: 11 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 11 fizzled)
(Sometimes they are GC'd.)
But if I yield the main thread before printing the results,
import Control.Parallel.Strategies
import Control.Concurrent

f :: Integer -> Integer
f 0 = 1
f n = n * f (n - 1)

main :: IO ()
main = do
    let l = [n..n+10]
        n = 1
        res = parMap rdeepseq f l
    res `seq` threadDelay 1
    print res
Then all the sparks are converted:
SPARKS: 11 (11 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
So it looks like you don't have enough sparks (try setting l = [n..n+1000] in my example), and they are not heavy enough (try setting n = 1000 in my example).
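For reference, here is the same test with both of those changes applied (just the constants changed; the sum is only there to force every element without printing huge numbers):
import Control.Parallel.Strategies

f :: Integer -> Integer
f 0 = 1
f n = n * f (n - 1)

main :: IO ()
main = do
    let n   = 1000                 -- heavier work per spark
        l   = [n .. n + 1000]      -- more sparks
        res = parMap rdeepseq f l
    print (sum res `mod` 1000003)  -- force all the results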

Parallel Haskell in order to find the divisors of a huge number

I have written the following program using Parallel Haskell to find the divisors of 1 billion.
import Control.Parallel

parfindDivisors :: Integer -> [Integer]
parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
  where
    f1 = filter g [1..(quot n 4)]
    f2 = filter g [(quot n 4)+1..(quot n 2)]
    g z = n `rem` z == 0

main = print (parfindDivisors 1000000000)
I've compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with:
findDivisors.exe +RTS -s -N2 -RTS
I have found a 50% speedup compared to the simple version which is this:
findDivisors :: Integer -> [Integer]
findDivisors n = filter g [1..(quot n 2)]
  where g z = n `rem` z == 0
My processor is an Intel Core 2 Duo (dual core).
I was wondering if there can be any improvement in the above code, because the statistics the program prints say:
Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)
and SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)
What are these converted, overflowed, dud, GC'd, and fizzled counts, and how can they help to improve the time?
IMO, the Par monad helps for reasoning about parallelism. It's a little higher-level than dealing with par and pseq.
Here's a rewrite of parfindDivisors using the Par monad. Note that this is essentially the same as your algorithm:
import Control.Monad.Par

findDivisors :: Integer -> [Integer]
findDivisors n = runPar $ do
    [f0, f1] <- sequence [new, new]
    fork $ put f0 (filter g [1..(quot n 4)])
    fork $ put f1 (filter g [(quot n 4)+1..(quot n 2)])
    [f0', f1'] <- sequence [get f0, get f1]
    return $ f0' ++ f1'
  where g z = n `rem` z == 0
Compiling that with -O2 -threaded -rtsopts -eventlog and running with +RTS -N2 -s yields the following relevant runtime stats:
36,000,130,784 bytes allocated in the heap
3,165,440 bytes copied during GC
48,464 bytes maximum residency (1 sample(s))
Tot time (elapsed) Avg pause Max pause
Gen 0 35162 colls, 35161 par 0.39s 0.32s 0.0000s 0.0006s
Gen 1 1 colls, 1 par 0.00s 0.00s 0.0002s 0.0002s
Parallel GC work balance: 1.32 (205296 / 155521, ideal 2)
MUT time 42.68s ( 21.48s elapsed)
GC time 0.39s ( 0.32s elapsed)
Total time 43.07s ( 21.80s elapsed)
Alloc rate 843,407,880 bytes per MUT second
Productivity 99.1% of total user, 195.8% of total elapsed
The productivity is very high. To improve the GC work balance slightly we can increase the GC allocation area size; run with +RTS -N2 -s -A128M, for example:
36,000,131,336 bytes allocated in the heap
47,088 bytes copied during GC
49,808 bytes maximum residency (1 sample(s))
Tot time (elapsed) Avg pause Max pause
Gen 0 135 colls, 134 par 0.19s 0.10s 0.0007s 0.0009s
Gen 1 1 colls, 1 par 0.00s 0.00s 0.0010s 0.0010s
Parallel GC work balance: 1.62 (2918 / 1801, ideal 2)
MUT time 42.65s ( 21.49s elapsed)
GC time 0.20s ( 0.10s elapsed)
Total time 42.85s ( 21.59s elapsed)
Alloc rate 843,925,806 bytes per MUT second
Productivity 99.5% of total user, 197.5% of total elapsed
But this is really just nitpicking. The real story comes from ThreadScope:
The utilisation is essentially maxed out for two cores, so additional significant parallelization (for two cores) is probably not going to happen.
Some good notes on the Par monad are here.
UPDATE
A rewrite of the alternative algorithm using Par looks something like this:
findDivisors :: Integer -> [Integer]
findDivisors n = let sqrtn = floor (sqrt (fromInteger n)) in runPar $ do
    [a, b] <- sequence [new, new]
    fork $ put a [a | (a, b) <- [quotRem n x | x <- [1..sqrtn]], b == 0]
    firstDivs <- get a
    fork $ put b [n `quot` x | x <- firstDivs, x /= sqrtn]
    secondDivs <- get b
    return $ firstDivs ++ secondDivs
But you're right in that this will not get any gains from parallelism due to the dependence on firstDivs.
You can still incorporate parallelism here, by getting Strategies involved to evaluate the elements of the list comprehensions in parallel. Something like:
import Control.Monad.Par
import Control.Parallel.Strategies

findDivisors :: Integer -> [Integer]
findDivisors n = let sqrtn = floor (sqrt (fromInteger n)) in runPar $ do
    [a, b] <- sequence [new, new]
    fork $ put a
        ([a | (a, b) <- [quotRem n x | x <- [1..sqrtn]], b == 0] `using` parListChunk 2 rdeepseq)
    firstDivs <- get a
    fork $ put b
        ([n `quot` x | x <- firstDivs, x /= sqrtn] `using` parListChunk 2 rdeepseq)
    secondDivs <- get b
    return $ firstDivs ++ secondDivs
and running this gives some stats like
3,388,800 bytes allocated in the heap
43,656 bytes copied during GC
68,032 bytes maximum residency (1 sample(s))
Tot time (elapsed) Avg pause Max pause
Gen 0 5 colls, 4 par 0.00s 0.00s 0.0000s 0.0001s
Gen 1 1 colls, 1 par 0.00s 0.00s 0.0002s 0.0002s
Parallel GC work balance: 1.22 (2800 / 2290, ideal 2)
MUT time (elapsed) GC time (elapsed)
Task 0 (worker) : 0.01s ( 0.01s) 0.00s ( 0.00s)
Task 1 (worker) : 0.01s ( 0.01s) 0.00s ( 0.00s)
Task 2 (bound) : 0.01s ( 0.01s) 0.00s ( 0.00s)
Task 3 (worker) : 0.01s ( 0.01s) 0.00s ( 0.00s)
SPARKS: 50 (49 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)
MUT time 0.01s ( 0.00s elapsed)
GC time 0.00s ( 0.00s elapsed)
Total time 0.01s ( 0.01s elapsed)
Alloc rate 501,672,834 bytes per MUT second
Productivity 85.0% of total user, 95.2% of total elapsed
Here almost 50 sparks were converted - that is, meaningful parallel work was being done - but the computations were not large enough to observe any wall-clock gains from parallelism. Any gains were probably offset by the overhead of scheduling computations in the threaded runtime.
I think this page explains it better than I could:
http://www.haskell.org/haskellwiki/ThreadScope_Tour/SparkOverview
I also found these slides interesting:
http://haskellwiki.gitit.net/Upload/HIW2011-Talk-Coutts.pdf
By modifying the original code as follows, I get a better speedup, but I think this code cannot be parallelised:
findDivisors2 :: Integer -> [Integer]
findDivisors2 n =
    let firstDivs  = [a | (a,b) <- [quotRem n x | x <- [1..sqrtn]], b == 0]
        secondDivs = [n `quot` x | x <- firstDivs, x /= sqrtn]
        sqrtn      = floor (sqrt (fromInteger n))
    in firstDivs ++ secondDivs
I tried to parallelise the code with this:
parfindDivisors2 :: Integer -> [Integer]
parfindDivisors2 n =
    let firstDivs  = [a | (a,b) <- [quotRem n x | x <- [1..sqrtn]], b == 0]
        secondDivs = [n `quot` x | x <- firstDivs, x /= sqrtn]
        sqrtn      = floor (sqrt (fromInteger n))
    in secondDivs `par` firstDivs ++ secondDivs
Instead of reducing the time, I have doubled it. I think this happens because findDivisors2 has a strong data dependence, while the first algorithm, findDivisors, does not.
Any comments are welcome.
