Is there a parallel find in Haskell?

I have a brute-force problem that I'd like to solve in Haskell. My machine has 16 cores, so I want to speed up my current algorithm a bit.
I have a function tryCombination which returns either a Just String or a Nothing. My loop looks like this:
findSolution = find isJust [ tryCombination a1 a2 a3 n z p |
                             a1 <- [600..700],
                             a2 <- [600..700],
                             a3 <- [600..700],
                             n <- [1..100],
                             ....
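For context, a rough signature consistent with the call above (the concrete argument types are an assumption; the question only describes the function):

tryCombination :: Int -> Int -> Int -> Int -> Int -> Int -> Maybe String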
I know there is a special parMap to parallelize a map function. A mapFind would be tricky, since it is not predictable whether a thread will really find the first occurrence. But is there something like a mapAny to speed up the search?
EDIT:
I rewrote the code using the "withStrategy (parList rseq)" snippet. The +RTS -s statistics look like this:
38,929,334,968 bytes allocated in the heap
2,215,280,048 bytes copied during GC
3,505,624 bytes maximum residency (795 sample(s))
202,696 bytes maximum slop
15 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 44922 colls, 44922 par 37.33s 8.34s 0.0002s 0.0470s
Gen 1 795 colls, 794 par 7.58s 1.43s 0.0018s 0.0466s
Parallel GC work balance: 4.36% (serial 0%, perfect 100%)
TASKS: 10 (1 bound, 9 peak workers (9 total), using -N8)
SPARKS: 17576 (8198 converted, 9378 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.00s ( 0.00s elapsed)
MUT time 81.79s ( 36.37s elapsed)
GC time 44.91s ( 9.77s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 126.72s ( 46.14s elapsed)
Alloc rate 475,959,220 bytes per MUT second
Productivity 64.6% of total user, 177.3% of total elapsed
gc_alloc_block_sync: 834851
whitehole_spin: 0
gen[0].sync: 10
gen[1].sync: 3724
As I already mentioned (see my comments), all the cores are working for only about three seconds (when all the sparks have been processed). For the following 30 s, all the work is done by a single core. How can I optimize this further?
Another EDIT:
I now gave "withStrategy (parBuffer 10 rdeepseq)" a try and fiddled around with different buffer sizes:
Buffer size   GC work balance   MUT       GC
10            50%               11.69s    0.94s
100           47%               12.31s    1.67s
500           40%               11.5 s    1.35s
5000          21%               11.47s    2.25s
First of all, this is a big improvement over the 59 s it took without any multithreading. The second conclusion is that the buffer size should be as small as possible, but bigger than the number of cores.
But best of all: I no longer have any overflowed or fizzled sparks. All of them were converted successfully.
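For anyone who wants to reproduce this, here is a self-contained sketch of the parBuffer version. The tryCombination below is a made-up stand-in (my real one is not shown here), so only the structure matters:

import Control.Parallel.Strategies (withStrategy, parBuffer, rdeepseq)
import Data.List (find)
import Data.Maybe (isJust)

-- Made-up stand-in for the real tryCombination: it "succeeds" whenever
-- a hash-like value hits zero, returning a description of the inputs.
tryCombination :: Int -> Int -> Int -> Maybe String
tryCombination a1 a2 n
  | (a1 * a2 + n) `mod` 9973 == 0 = Just (show (a1, a2, n))
  | otherwise                     = Nothing

-- parBuffer 10 only sparks a rolling window of 10 list elements ahead
-- of the consumer, instead of sparking the whole list at once; that is
-- what got rid of the overflowed sparks.
main :: IO ()
main = print $
  find isJust $
  withStrategy (parBuffer 10 rdeepseq)
    [ tryCombination a1 a2 n
    | a1 <- [600..700], a2 <- [600..700], n <- [1..100] ]

Compile with ghc -O2 -threaded -rtsopts and run with +RTS -N -s to see the spark statistics.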

Depending on the laziness of tryCombination and the desired degree of parallelization, one of these might do what you want:
import Control.Parallel.Strategies

findSolution =
  find isJust $
  withStrategy (parList rseq) $
    [ tryCombination a1 a2 a3 n z p
    | a1 <- [600..700]
    , a2 <- [600..700]
    , a3 <- [600..700]
    , n  <- [1..100]]
This parallelizes the work performed by tryCombination to figure out whether it is a Just or a Nothing, but not the evaluation of the actual value inside the Just.
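A tiny illustration of that evaluation-depth difference (my own example, not from the question):

import Control.Parallel.Strategies (runEval, rseq, rdeepseq)

-- rseq stops at weak head normal form: the Just constructor is forced,
-- but the String inside it stays an unevaluated thunk. rdeepseq forces
-- the whole structure, string included.
whnfOnly, fullyForced :: Maybe String
whnfOnly    = runEval (rseq     (Just (replicate 5 'x')))
fullyForced = runEval (rdeepseq (Just (replicate 5 'x')))

main :: IO ()
main = print (whnfOnly, fullyForced)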
If there is no such laziness to be exploited and the result type is simple, it might work better to write
findSolution =
  find isJust $
  withStrategy (parList rdeepseq) $
    [ tryCombination a1 a2 a3 n z p
    | a1 <- [600..700]
    , a2 <- [600..700]
    , a3 <- [600..700]
    , n  <- [1..100]]

Related

Frequent GC preventing sparks from running in parallel

I tried running the first example here: http://chimera.labs.oreilly.com/books/1230000000929/ch03.html
Code: https://github.com/simonmar/parconc-examples/blob/master/strat.hs
import Control.Parallel
import Control.Parallel.Strategies (rpar, Strategy, using)
import Text.Printf
import System.Environment

-- <<fib
fib :: Integer -> Integer
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
-- >>

main = print pair
  where
    pair =
      -- <<pair
      (fib 35, fib 36) `using` parPair
      -- >>

-- <<parPair
parPair :: Strategy (a,b)
parPair (a,b) = do
  a' <- rpar a
  b' <- rpar b
  return (a',b')
-- >>
I built it using ghc 7.10.2 (on OS X, on a multicore machine) with the following command:
ghc -O2 strat.hs -threaded -rtsopts -eventlog
And run using:
./strat +RTS -N2 -l -s
I expected the two fib calculations to run in parallel (the previous chapter's examples worked as expected, so no setup issues), but I wasn't getting any speedup at all, as seen here:
% ./strat +RTS -N2 -l -s
(14930352,24157817)
3,127,178,800 bytes allocated in the heap
6,323,360 bytes copied during GC
70,000 bytes maximum residency (2 sample(s))
31,576 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 5963 colls, 5963 par 0.179s 0.074s 0.0000s 0.0001s
Gen 1 2 colls, 1 par 0.000s 0.000s 0.0001s 0.0001s
Parallel GC work balance: 2.34% (serial 0%, perfect 100%)
TASKS: 6 (1 bound, 5 peak workers (5 total), using -N2)
SPARKS: 2 (0 converted, 0 overflowed, 0 dud, 1 GC'd, 1 fizzled)
INIT time 0.000s ( 0.001s elapsed)
MUT time 1.809s ( 1.870s elapsed)
GC time 0.180s ( 0.074s elapsed)
EXIT time 0.000s ( 0.000s elapsed)
Total time 1.991s ( 1.945s elapsed)
Alloc rate 1,728,514,772 bytes per MUT second
Productivity 91.0% of total user, 93.1% of total elapsed
gc_alloc_block_sync: 238
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
-N1 gets similar results (omitted).
The # of GC collections seemed suspicious, as pointed out by others in #haskell-beginners, so I tried adding -A16M when running. The results looked much more in line with expectations:
% ./strat +RTS -N2 -l -s -A16M
(14930352,24157817)
3,127,179,920 bytes allocated in the heap
260,960 bytes copied during GC
69,984 bytes maximum residency (2 sample(s))
28,320 bytes maximum slop
33 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 115 colls, 115 par 0.105s 0.002s 0.0000s 0.0003s
Gen 1 2 colls, 1 par 0.000s 0.000s 0.0002s 0.0002s
Parallel GC work balance: 71.25% (serial 0%, perfect 100%)
TASKS: 6 (1 bound, 5 peak workers (5 total), using -N2)
SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)
INIT time 0.001s ( 0.001s elapsed)
MUT time 1.579s ( 1.087s elapsed)
GC time 0.106s ( 0.002s elapsed)
EXIT time 0.000s ( 0.000s elapsed)
Total time 1.686s ( 1.091s elapsed)
Alloc rate 1,980,993,138 bytes per MUT second
Productivity 93.7% of total user, 144.8% of total elapsed
gc_alloc_block_sync: 27
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
The question is: Why is this the behavior? Even with frequent GC, I still intuitively expect the 2 sparks to run in parallel in the other 90% of the running time.
Yes, this is actually a bug in GHC 8.0.1 and earlier (I'm working on fixing it for 8.0.2). The problem is that the fib 35 and fib 36 expressions are constant and so GHC lifts them to the top level as CAFs, and the RTS was wrongly assuming that the CAFs were unreachable and so garbage collecting the sparks.
You can work around it by making the expressions non-constant by passing in parameters on the command line:
main = do
  [a,b] <- map read <$> getArgs
  let pair = (fib a, fib b) `using` parPair
  print pair
and then run the program with ./strat 35 36.
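For reference, the complete workaround program, assembled from the snippets above (it is just strat.hs with the constant arguments made dynamic):

import Control.Parallel.Strategies (rpar, Strategy, using)
import System.Environment (getArgs)

fib :: Integer -> Integer
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

parPair :: Strategy (a,b)
parPair (a,b) = do
  a' <- rpar a
  b' <- rpar b
  return (a',b')

-- Because a and b now come from the command line, GHC can no longer
-- float (fib a, fib b) to the top level as constant CAFs, so the
-- sparks are not garbage collected.
main :: IO ()
main = do
  [a,b] <- map read <$> getArgs
  let pair = (fib a, fib b) `using` parPair
  print pair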

Why does `-threaded` make it slower?

A simple plan:
import qualified Data.ByteString.Lazy.Char8 as BS

main = do
  wc <- length . BS.words <$> BS.getContents
  print wc
Build for speed:
ghc -fllvm -O2 -threaded -rtsopts Words.hs
More CPUs means it runs more slowly?
$ time ./Words +RTS -qa -N1 < big.txt
331041862
real 0m25.963s
user 0m21.747s
sys 0m1.528s
$ time ./Words +RTS -qa -N2 < big.txt
331041862
real 0m36.410s
user 0m34.910s
sys 0m6.892s
$ time ./Words +RTS -qa -N4 < big.txt
331041862
real 0m42.150s
user 0m55.393s
sys 0m16.227s
For good measure:
$time wc -w big.txt
331041862 big.txt
real 0m8.277s
user 0m7.553s
sys 0m0.529s
Clearly, this is a single-threaded activity. Still, I wonder why it slows down so much.
Also, do you have any tips on how I can make it competitive with wc?
It's GC. I executed your program with +RTS -s and the results tell the whole story.
-N1
D:\>a +RTS -qa -N1 -s < lorem.txt
15470835
4,558,095,152 bytes allocated in the heap
1,746,720 bytes copied during GC
77,936 bytes maximum residency (118 sample(s))
131,856 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 8519 colls, 0 par 0.016s 0.021s 0.0000s 0.0001s
Gen 1 118 colls, 0 par 0.000s 0.004s 0.0000s 0.0001s
TASKS: 3 (1 bound, 2 peak workers (2 total), using -N1)
SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.000s ( 0.001s elapsed)
MUT time 0.842s ( 0.855s elapsed)
GC time 0.016s ( 0.025s elapsed)
EXIT time 0.016s ( 0.000s elapsed)
Total time 0.874s ( 0.881s elapsed)
Alloc rate 5,410,809,512 bytes per MUT second
Productivity 98.2% of total user, 97.4% of total elapsed
gc_alloc_block_sync: 0
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
-N4
D:\>a +RTS -qa -N4 -s < lorem.txt
15470835
4,558,093,352 bytes allocated in the heap
1,720,232 bytes copied during GC
77,936 bytes maximum residency (113 sample(s))
160,432 bytes maximum slop
4 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 8524 colls, 8524 par 4.742s 1.678s 0.0002s 0.0499s
Gen 1 113 colls, 112 par 0.031s 0.027s 0.0002s 0.0099s
Parallel GC work balance: 1.40% (serial 0%, perfect 100%)
TASKS: 6 (1 bound, 5 peak workers (5 total), using -N4)
SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.000s ( 0.001s elapsed)
MUT time 1.950s ( 1.415s elapsed)
GC time 4.774s ( 1.705s elapsed)
EXIT time 0.000s ( 0.000s elapsed)
Total time 6.724s ( 3.121s elapsed)
Alloc rate 2,337,468,786 bytes per MUT second
Productivity 29.0% of total user, 62.5% of total elapsed
gc_alloc_block_sync: 21082
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
The most significant parts are
Tot time (elapsed) Avg pause Max pause
Gen 0 8524 colls, 8524 par 4.742s 1.678s 0.0002s 0.0499s
and
Parallel GC work balance: 1.40% (serial 0%, perfect 100%)
When the -threaded switch is on, the GHC runtime will try its best to balance any work among threads as far as possible. Your whole program is a sequential process, so the only work that can be moved to other threads is GC; but your program in fact cannot be GCed in parallel, so these threads wait for one another to complete their jobs, resulting in a lot of time wasted on synchronization.
If you tell the runtime not to balance the work among threads with +RTS -qm, then sometimes -N4 is as fast as -N1.
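As for making it competitive with wc (which the answer above doesn't address): one idea, sketched here as an assumption worth benchmarking rather than a guaranteed win, is to count word boundaries in a single strict fold, so that no intermediate list of word slices is ever built:

{-# LANGUAGE BangPatterns #-}
import qualified Data.ByteString.Lazy.Char8 as BS
import Data.Char (isSpace)

-- Count transitions from whitespace to non-whitespace in one pass.
countWords :: BS.ByteString -> Int
countWords = fst . BS.foldl' step (0, True)
  where
    step (!n, prevSpace) c
      | isSpace c = (n, True)
      | prevSpace = (n + 1, False)
      | otherwise = (n, False)

main :: IO ()
main = print . countWords =<< BS.getContents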

Profiling Two Functions That Sum Large List

I just started reading Parallel and Concurrent Programming in Haskell.
I wrote two programs that, I believe, sum a list in two ways:
- running rpar (force (sum list))
- splitting up the list, running the above on each half, and adding the results
Here's the code:
import Control.Parallel.Strategies
import Control.DeepSeq
import System.Environment

main :: IO ()
main = do
  [n] <- getArgs
  [single, faster] !! (read n - 1)

single :: IO ()
single = print . runEval $ rpar (sum list)

faster :: IO ()
faster = print . runEval $ do
  let (as, bs) = splitAt (length list `div` 2) list
  res1 <- rpar (sum as)
  res2 <- rpar (sum bs)
  return (res1 + res2)

list :: [Integer]
list = [1..10000000]
Compile with parallelization enabled (-threaded)
C:\Users\k\Workspace\parallel_concurrent_haskell>ghc Sum.hs -O2 -threaded -rtsopts
[1 of 1] Compiling Main ( Sum.hs, Sum.o )
Linking Sum.exe ...
Results of the single program
C:\Users\k\Workspace\parallel_concurrent_haskell>Sum 1 +RTS -s -N2
50000005000000
960,065,896 bytes allocated in the heap
363,696 bytes copied during GC
43,832 bytes maximum residency (2 sample(s))
57,016 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1837 colls, 1837 par 0.00s 0.01s 0.0000s 0.0007s
Gen 1 2 colls, 1 par 0.00s 0.00s 0.0002s 0.0003s
Parallel GC work balance: 0.18% (serial 0%, perfect 100%)
TASKS: 4 (1 bound, 3 peak workers (3 total), using -N2)
SPARKS: 1 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.27s ( 0.27s elapsed)
GC time 0.00s ( 0.01s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.27s ( 0.28s elapsed)
Alloc rate 3,614,365,726 bytes per MUT second
Productivity 100.0% of total user, 95.1% of total elapsed
gc_alloc_block_sync: 573
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
Run with faster
C:\Users\k\Workspace\parallel_concurrent_haskell>Sum 2 +RTS -s -N2
50000005000000
1,600,100,336 bytes allocated in the heap
1,477,564,464 bytes copied during GC
400,027,984 bytes maximum residency (14 sample(s))
70,377,336 bytes maximum slop
911 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 3067 colls, 3067 par 1.05s 0.68s 0.0002s 0.0021s
Gen 1 14 colls, 13 par 1.98s 1.53s 0.1093s 0.5271s
Parallel GC work balance: 0.00% (serial 0%, perfect 100%)
TASKS: 4 (1 bound, 3 peak workers (3 total), using -N2)
SPARKS: 2 (0 converted, 0 overflowed, 0 dud, 1 GC'd, 1 fizzled)
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.38s ( 1.74s elapsed)
GC time 3.03s ( 2.21s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 3.42s ( 3.95s elapsed)
Alloc rate 4,266,934,229 bytes per MUT second
Productivity 11.4% of total user, 9.9% of total elapsed
gc_alloc_block_sync: 335
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
Why did single complete in 0.28 seconds, but faster (poorly named, evidently) took 3.95 seconds?
I am no expert in Haskell-specific profiling, but I can see several possible problems in faster. You are walking the input list at least three times: once to get its length, once for splitAt (maybe it is twice, I'm not totally sure how this is implemented), and then again to read and sum its elements. In single, the list is walked only once.
You also hold the entire list in memory at once with faster, but with single, Haskell can process it lazily and GC as it goes. If you look at the profiling output, you can see that faster is copying many more bytes during GC: over 3,000 times more! faster also needed 400 MB of memory all at once, where single needed only 40 KB at a time. So the garbage collector had a larger space to keep scanning over.
Another big issue: you allocate a ton of new cons cells in faster, to hold the two intermediate sub-lists. Even if it could all be GCed right away, this is a lot of time spent allocating. It's more expensive than just doing the addition to begin with! So even before you start adding, you are already "over budget" compared to single.
Following amalloy's answer... My machine is slower than yours, and running your single took
Total time 0.41s ( 0.35s elapsed)
I tried:
list  = [1..10000000]
list1 = [1..5000000]
list2 = [5000001..10000000]

fastest :: IO ()
fastest = print . runEval $ do
  res1 <- rpar (sum list1)
  res2 <- rpar (sum list2)
  return (res1 + res2)
With that I got
c:\Users\peter\Documents\Haskell\practice>parlist 4 +RTS -s -N2
parlist 4 +RTS -s -N2
50000005000000
960,068,544 bytes allocated in the heap
1,398,472 bytes copied during GC
43,832 bytes maximum residency (3 sample(s))
203,544 bytes maximum slop
3 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1836 colls, 1836 par 0.00s 0.01s 0.0000s 0.0009s
Gen 1 3 colls, 2 par 0.00s 0.00s 0.0002s 0.0004s
Parallel GC work balance: 0.04% (serial 0%, perfect 100%)
TASKS: 4 (1 bound, 3 peak workers (3 total), using -N2)
SPARKS: 2 (0 converted, 0 overflowed, 0 dud, 1 GC'd, 1 fizzled)
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.31s ( 0.33s elapsed)
GC time 0.00s ( 0.01s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.31s ( 0.35s elapsed)
Alloc rate 3,072,219,340 bytes per MUT second
Productivity 100.0% of total user, 90.1% of total elapsed
which is faster...

Using GHC's profiling stats/charts to identify trouble-areas / improve performance of Haskell code

TL;DR: Based on the Haskell code and its associated profiling data below, what conclusions can we draw that let us modify/improve it so we can narrow the performance gap vs. the same algorithm written in imperative languages (namely C++ / Python / C#, but the specific language isn't important)?
Background
I wrote the following piece of code as an answer to a question on a popular site which contains many questions of a programming and/or mathematical nature. (You've probably heard of this site, whose name is pronounced "oiler" by some, "yoolurr" by others.) Since the code below is a solution to one of the problems, I'm intentionally avoiding any mention of the site's name or any specific terms in the problem. That said, I'm talking about problem one hundred and three.
(In fact, I've seen many solutions in the site's forums from resident Haskell wizards :P)
Why did I choose to profile this code?
This was the first problem (on said site) where I encountered a difference in performance (as measured by execution time) between Haskell code and C++/Python/C# code (when both use a similar algorithm). In fact, it was the case for all of the problems so far (I've done ~100 problems, though not sequentially) that optimized Haskell code was pretty much neck-and-neck with the fastest C++ solutions, ceteris paribus for the algorithm, of course.
However, the posts in the forum for this particular problem would indicate that the same algorithm in these other languages typically require at most one or two seconds, with the longest taking 10-15 sec (assuming the same starting parameters; I'm ignoring the very naive algorithms that take 2-3 min+). In contrast, the Haskell code below required ~50 sec on my (decent) computer (with profiling disabled; with profiling enabled, it takes ~2 min, as you can see below; note: the exec time was identical when compiling with -fllvm). Specs: i5 2.4ghz laptop, 8gb RAM.
In an effort to learn Haskell in a way that it can become a viable substitute to the imperative languages, one of my aims in solving these problems is learning to write code that, to the extent possible, has performance that's on par with those imperative languages. In that context, I still consider the problem as yet unsolved by me (since there's nearly a ~25x difference in performance!)
What have I done so far?
In addition to the obvious step of streamlining the code itself (to the best of my ability), I've also performed the standard profiling exercises that are recommended in "Real World Haskell".
But I'm having a hard time drawing conclusions that tell me which pieces need to be modified. That's where I'm hoping folks might be able to help provide some guidance.
Description of the problem:
I'd refer you to the website of problem one hundred and three on the aforementioned site but here's a brief summary: the goal is to find a group of seven numbers such that any two disjoint subgroups (of that group) satisfy the following two properties (I'm trying to avoid using the 's-e-t' word for reasons mentioned above...):
no two subgroups sum to the same amount
the subgroup with more elements has a larger sum (in other words, the sum of the smallest four elements must be greater than the sum of the largest three elements).
In particular, we are trying to find the group of seven numbers with the smallest sum.
My (admittedly weak) observations
A warning: some of these comments may well be totally wrong, but I wanted to at least take a stab at interpreting the profiling data based on what I read in Real World Haskell and other profiling-related posts on SO.
There does indeed seem to be an efficiency issue, seeing as one-third of the time is spent doing garbage collection (37.1%). The first table of figures shows that ~172 GB is allocated in the heap, which seems horrible... (Maybe there's a better structure/function to use for implementing the dynamic programming solution?)
Not surprisingly, the vast majority (83.1%) of time is spent checking rule 1: (i) 41.6% in the value sub-function, which determines values to fill in the dynamic programming ("DP") table, (ii) 29.1% in the table function, which generates the DP table and (iii) 12.4% in the rule1 function, which checks the resulting DP table to make sure that a given sum can only be calculated in one way (i.e., from one subgroup).
However, I did find it surprising that more time was spent in the value function relative to the table and rule1 functions, given that it's the only one of the three which doesn't construct an array or filter through a large number of elements (it's really only performing O(1) lookups and making comparisons between Int types, which you'd think would be relatively quick). So this is a potential problem area. That said, it's unlikely that the value function is driving the high heap allocation.
Frankly, I'm not sure what to make of the three charts.
Heap profile chart (i.e., the first chart below):
I'm honestly not sure what is represented by the red area marked as Pinned. It makes sense that the dynamic function has a "spiky" memory allocation because it's called every time the construct function generates a tuple that meets the first three criteria and, each time it's called, it creates a decently large DP array. Also, I'd think that the allocation of memory to store the tuples (generated by construct) wouldn't be flat over the course of the program.
Pending clarification of the "Pinned" red area, I'm not sure this one tells us anything useful.
Allocation by type and allocation by constructor:
I suspect that the ARR_WORDS (which represents a ByteString or unboxed Array according to the GHC docs) represents the low-level construction of the DP array (in the table function). But I'm not 100% sure.
I'm not sure what the FROZEN and STATIC pointer categories correspond to.
Like I said, I'm really not sure how to interpret the charts as nothing jumps out (to me) as unexpected.
The code and the profiling results
Without further ado, here's the code with comments explaining my algorithm. I've tried to make sure the code doesn't run off of the right-side of the code-box - but some of the comments do require scrolling (sorry).
{-# LANGUAGE NoImplicitPrelude #-}
{-# OPTIONS_GHC -Wall #-}

import CorePrelude
import Data.Array
import Data.List
import Data.Bool.HT ((?:))
import Control.Monad (guard)

main = print (minimum construct)

cap  = 55 :: Int
flr  = 20 :: Int
step = 1  :: Int

--we enumerate tuples that are potentially valid and then
--filter for valid ones; we perform the most computationally
--expensive step (i.e., rule 1) at the very end
construct :: [[Int]]
construct = {-# SCC "construct" #-} do
  a <- [flr..cap]      --1st: we construct potentially valid tuples while applying a
  b <- [a+step..cap]   --constraint on the upper bound of any element as implied by rule 2
  c <- [b+step..a+b-1]
  d <- [c+step..a+b-1]
  e <- [d+step..a+b-1]
  f <- [e+step..a+b-1]
  g <- [f+step..a+b-1]
  guard (a + b + c + d - e - f - g > 0)  --2nd: we screen for tuples that completely conform to rule 2
  let nn = [g,f,e,d,c,b,a]
  guard (sum nn < 285)  --3rd: we screen for tuples of a certain size (a guess to speed things up)
  guard (rule1 nn)      --4th: we screen for tuples that conform to rule 1
  return nn

rule1 :: [Int] -> Bool
rule1 nn = {-# SCC "rule1" #-}
  null . filter ((>1) . snd)                --confirm that only one subgroup sums to any given sum
       . filter ((length nn==) . snd . fst) --the last column is how many subgroups sum to a given sum
       . assocs                             --run the dynamic programming algorithm and generate a table
       $ dynamic nn

dynamic :: [Int] -> Array (Int,Int) Int
dynamic ns = {-# SCC "dynamic" #-} table
  where
    (len, maxSum) = (length &&& sum) ns
    table = array ((0,0),(maxSum,len))
              [ ((s,i),x) | s <- [0..maxSum], i <- [0..len], let x = value (s,i) ]
    elements = listArray (0,len) (0:ns)
    value (s,i)
      | i == 0 || s == 0 = 0
      | s == m           = table ! (s,i-1) + 1
      | s > m            = s <= sum (take i ns) ?:
                             (table ! (s,i-1) + table ! ((s-m),i-1), 0)
      | otherwise        = 0
      where
        m = elements ! i
Stats on heap allocation, garbage collection and time elapsed:
% ghc -O2 --make 103_specialsubset2.hs -rtsopts -prof -auto-all -caf-all -fforce-recomp
[1 of 1] Compiling Main ( 103_specialsubset2.hs, 103_specialsubset2.o )
Linking 103_specialsubset2 ...
% time ./103_specialsubset2.hs +RTS -p -sstderr
zsh: permission denied: ./103_specialsubset2.hs
./103_specialsubset2.hs +RTS -p -sstderr 0.00s user 0.00s system 86% cpu 0.002 total
% time ./103_specialsubset2 +RTS -p -sstderr
SOLUTION REDACTED
172,449,596,840 bytes allocated in the heap
21,738,677,624 bytes copied during GC
261,128 bytes maximum residency (74 sample(s))
55,464 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 327548 colls, 0 par 27.34s 41.64s 0.0001s 0.0092s
Gen 1 74 colls, 0 par 0.02s 0.02s 0.0003s 0.0013s
INIT time 0.00s ( 0.01s elapsed)
MUT time 53.91s ( 70.60s elapsed)
GC time 27.35s ( 41.66s elapsed)
RP time 0.00s ( 0.00s elapsed)
PROF time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 81.26s (112.27s elapsed)
%GC time 33.7% (37.1% elapsed)
Alloc rate 3,199,123,974 bytes per MUT second
Productivity 66.3% of total user, 48.0% of total elapsed
./103_specialsubset2 +RTS -p -sstderr 81.26s user 30.90s system 99% cpu 1:52.29 total
Stats on time spent per cost-centre:
Wed Dec 17 23:21 2014 Time and Allocation Profiling Report (Final)
103_specialsubset2 +RTS -p -sstderr -RTS
total time = 15.56 secs (15565 ticks @ 1000 us, 1 processor)
total alloc = 118,221,354,488 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc
dynamic.value Main 41.6 17.7
dynamic.table Main 29.1 37.8
construct Main 12.9 37.4
rule1 Main 12.4 7.0
dynamic.table.x Main 1.9 0.0
individual inherited
COST CENTRE MODULE no. entries %time %alloc %time %alloc
MAIN MAIN 55 0 0.0 0.0 100.0 100.0
main Main 111 0 0.0 0.0 0.0 0.0
CAF:main1 Main 108 0 0.0 0.0 0.0 0.0
main Main 110 1 0.0 0.0 0.0 0.0
CAF:main2 Main 107 0 0.0 0.0 0.0 0.0
main Main 112 0 0.0 0.0 0.0 0.0
CAF:main3 Main 106 0 0.0 0.0 0.0 0.0
main Main 113 0 0.0 0.0 0.0 0.0
CAF:construct Main 105 0 0.0 0.0 100.0 100.0
construct Main 114 1 0.6 0.0 100.0 100.0
construct Main 115 1 12.9 37.4 99.4 100.0
rule1 Main 123 282235 0.6 0.0 86.5 62.6
rule1 Main 124 282235 12.4 7.0 85.9 62.6
dynamic Main 125 282235 0.2 0.0 73.5 55.6
dynamic.elements Main 133 282235 0.3 0.1 0.3 0.1
dynamic.len Main 129 282235 0.0 0.0 0.0 0.0
dynamic.table Main 128 282235 29.1 37.8 72.9 55.5
dynamic.table.x Main 130 133204473 1.9 0.0 43.8 17.7
dynamic.value Main 131 133204473 41.6 17.7 41.9 17.7
dynamic.value.m Main 132 132640003 0.3 0.0 0.3 0.0
dynamic.maxSum Main 127 282235 0.0 0.0 0.0 0.0
dynamic.(...) Main 126 282235 0.1 0.0 0.1 0.0
dynamic Main 122 282235 0.0 0.0 0.0 0.0
construct.nn Main 121 12683926 0.0 0.0 0.0 0.0
CAF:main4 Main 102 0 0.0 0.0 0.0 0.0
construct Main 116 0 0.0 0.0 0.0 0.0
construct Main 117 0 0.0 0.0 0.0 0.0
CAF:cap Main 101 0 0.0 0.0 0.0 0.0
cap Main 119 1 0.0 0.0 0.0 0.0
CAF:flr Main 100 0 0.0 0.0 0.0 0.0
flr Main 118 1 0.0 0.0 0.0 0.0
CAF:step_r1dD Main 99 0 0.0 0.0 0.0 0.0
step Main 120 1 0.0 0.0 0.0 0.0
CAF GHC.IO.Handle.FD 96 0 0.0 0.0 0.0 0.0
CAF GHC.Conc.Signal 93 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Encoding 91 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Encoding.Iconv 82 0 0.0 0.0 0.0 0.0
Heap profile (chart omitted):
Allocation by type (chart omitted):
Allocation by constructors (chart omitted):
There is a lot that can be said. In this answer I'll just comment on the nested list comprehensions in the construct function.
To get an idea of what's going on in construct, we'll isolate it and compare it to the nested-loop version that you would write in an imperative language. We've removed the rule1 guard to test only the generation of lists.
-- List.hs -- using list comprehensions
import Control.Monad

cap  = 55 :: Int
flr  = 20 :: Int
step = 1  :: Int

construct :: [[Int]]
construct = do
  a <- [flr..cap]
  b <- [a+step..cap]
  c <- [b+step..a+b-1]
  d <- [c+step..a+b-1]
  e <- [d+step..a+b-1]
  f <- [e+step..a+b-1]
  g <- [f+step..a+b-1]
  guard (a + b + c + d - e - f - g > 0)
  guard (a + b + c + d + e + f + g < 285)
  return [g,f,e,d,c,b,a]
  -- guard (rule1 nn)

main = do
  forM_ construct print
-- Loops.hs -- using imperative looping
import Control.Monad

loop a b f = go a
  where go i | i > b     = return ()
             | otherwise = do f i; go (i+1)

cap  = 55 :: Int
flr  = 20 :: Int
step = 1  :: Int

main =
  loop flr cap $ \a ->
    loop (a+step) cap $ \b ->
      loop (b+step) (a+b-1) $ \c ->
        loop (c+step) (a+b-1) $ \d ->
          loop (d+step) (a+b-1) $ \e ->
            loop (e+step) (a+b-1) $ \f ->
              loop (f+step) (a+b-1) $ \g ->
                if (a+b+c+d-e-f-g > 0) && (a+b+c+d+e+f+g < 285)
                  then print [g,f,e,d,c,b,a]
                  else return ()
Both programs were compiled with ghc -O2 -rtsopts and run with prog +RTS -s > out.
Here is a summary of the results:
                   Lists.hs     Loops.hs
Heap allocation    44,913 MB    2,740 MB
Max. Residency     44,312       44,312
%GC                5.8 %        1.7 %
Total Time         9.48 secs    1.43 secs
As you can see, the loop version, which is the way you would write this in a language like C, wins in every category.
The list comprehension version is cleaner and more composable, but also less performant than direct iteration.
