How to merge Hoopl graph blocks / how to pass through the blocks - haskell

I'm trying to introduce Hoopl into a compiler and have run into a problem: building
a graph for Hoopl makes the nodes appear in the order in which their labels were introduced.
E.g.:
(define (test) (if (eq? (random) 1 ) 2 (if (eq? (random) 2 ) 3 0) ) )
"compiles" to
L25: call-direct random -> _tmp7_6
branch L27
L26: return RETVAL
L27: iconst 1 _tmp8_7
branch L28
L28: call-direct eq? _tmp7_6, _tmp8_7 -> _tmp4_8
branch L29
L29: cond-branch _tmp4_8 L30 L31
L30: iconst 2 RETVAL
branch L26
L31: call-direct random -> _tmp12_10
branch L32
L32: iconst 2 _tmp13_11
branch L33
L33: call-direct eq? _tmp12_10, _tmp13_11 -> _tmp9_12
branch L34
L34: cond-branch _tmp9_12 L36 L37
L35: assign RETVAL _tmp6_15
branch L26
L36: iconst 3 _tmp6_15
branch L35
L37: iconst 0 _tmp6_15
branch L35
The order of the instructions (as printed by showGraph) is strange because of the order in which
the graph is built recursively from the AST. In order to generate code I need to reorder the blocks in a more natural way: for example, place return RETVAL at the end of the function, and merge blocks like
branch Lx
Lx: ...
into one block, and so on. It seems that I need something like:
block1 = get block
Ln     = get target of block1's last jump
block2 = find block Ln
if (some conditions)
    remove block2
    replace block1 (merge block1 block2)
I'm totally confused about how to perform this with Hoopl. Of course, I could dump all the nodes
and then perform the transformations outside the Hoopl framework, but I believe that would be a
bad idea.
Can someone give me a clue? I did not find any useful examples. Something similar is done in the Lambdachine project, but it seems too complicated.
There is also another question. Is there any point in making all Call instructions non-local?
What is the point of that, considering that the implementation of Call does not change any local
variables and always transfers control to the next instruction of the block? If Call instructions are defined like
data Insn e x where
  Call :: [Expr] -> Expr -> Label -> Insn O C -- last instruction of the block
then the graph looks even stranger. So I use
  -- what is the difference from any other primitive, like "add a b -> c"?
  Call :: [Expr] -> Expr -> Label -> Insn O O
Maybe I'm wrong about this?

It is possible to implement this "block merging" using HOOPL. Your question is too generic, so I'll give you a plan:
1. Determine what analysis type this optimization requires (either forward or backward)
2. Design the analysis lattice
3. Design the transfer function
4. Design the rewriting function
5. Create a pass
6. Merge the pass with other passes of the same direction so they interleave
7. Run the pass using fuel
8. Convert the optimized graph back to the form you need
With which steps do you have problems? Steps 1 and 2 should be rather straightforward once you've read the papers.
You should also understand the general concept of a basic block: why instructions are merged into blocks, why the control-flow graph consists of blocks and not of individual instructions, and why the analysis is performed on blocks rather than on individual instructions.
Your rewrite function should use the facts to rewrite the last node in the block. So the fact lattice should include not only "information about reachability", but also the destination blocks themselves.
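To make steps 2-5 concrete, here is a rough skeleton in Haskell. It is only a sketch: it assumes the Insn type from the question (with a NonLocal instance) is in scope, uses a placeholder fact, and follows the hoopl 3.x API (record fields and helper names may differ slightly in other versions).
import Compiler.Hoopl

-- Placeholder fact; a real block-merging pass would record, per label,
-- the block body that a plain "branch L" may be replaced with.
type MergeFact = ()

mergeLattice :: DataflowLattice MergeFact
mergeLattice = DataflowLattice
  { fact_name = "block merging"
  , fact_bot  = ()
  , fact_join = \_lbl (OldFact _) (NewFact _) -> (NoChange, ())
  }

mergeTransfer :: FwdTransfer Insn MergeFact
mergeTransfer = mkFTransfer3 frst mddl lst
  where
    frst :: Insn C O -> MergeFact -> MergeFact
    frst _ f = f
    mddl :: Insn O O -> MergeFact -> MergeFact
    mddl _ f = f
    lst  :: Insn O C -> MergeFact -> FactBase MergeFact
    lst  n f = mkFactBase mergeLattice [(l, f) | l <- successors n]

mergeRewrite :: FuelMonad m => FwdRewrite m Insn MergeFact
mergeRewrite = mkFRewrite rw
  where
    -- Rewrite the last node of a block: when the fact says the branch
    -- target can be inlined, return a replacement graph instead of Nothing.
    rw _node _fact = return Nothing

mergePass :: FuelMonad m => FwdPass m Insn MergeFact
mergePass = FwdPass { fp_lattice  = mergeLattice
                    , fp_transfer = mergeTransfer
                    , fp_rewrite  = mergeRewrite }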

I've found and tried a couple of ways to do the trick:
1. Using the foldBlockNodesF3 function (or the other foldBlockNodes... functions)
2. Using the preorder_dfs* functions (as in the Lambdachine project)
3. Building the graph with larger blocks from the start
The last option is not useful for me, because the FactBase is keyed by labels, so every instruction that changes the liveness of variables needs a label of its own to be usable in the subsequent analysis.
So my final solution is to use the foldBlockNodesF3 function to linearize the graph and delete the extra labels manually, doing register allocation at the same time (see the sketch below for the general idea).
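For reference, a minimal sketch of the "dump and linearize" idea. It uses foldGraphNodes, a simpler relative of foldBlockNodesF3 with a uniform accumulator type, and assumes the question's Insn type plus a hypothetical AnyInsn wrapper (names are from hoopl 3.x and may differ between versions):
{-# LANGUAGE GADTs #-}
import Compiler.Hoopl

-- Existential wrapper so nodes of different shapes can share one list.
data AnyInsn where
    AnyInsn :: Insn e x -> AnyInsn

-- Collect every node of the graph into a flat list; labels and trivial
-- branches can then be dropped or fused while emitting code.  The order
-- may still need fixing up (e.g. via preorder_dfs) before emission.
linearize :: Graph Insn e x -> [AnyInsn]
linearize g = reverse (foldGraphNodes (\n acc -> AnyInsn n : acc) g [])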

Related

Heap profiling in Haskell. Am I having space leaks?

I am writing a small snake game in Haskell as sort of a guided tutorial for beginners. The "rendering" just takes a Board and produces a Data.ByteString.Builder which is printed in the terminal. (The HTML profiles are pushed to the repo, so you can inspect them without compiling the program.)
The problem
The problem I have is that the heap profile looks weird: there are many spikes, and suddenly Builder, PAP and BuildStep take up as much memory as the rest of the program. Considering that rendering happens 10 times a second (i.e. every second we produce 10 builders), it seems inconsistent that every once in a while the builder just takes that much memory. I don't know if this counts as a space leak, since there are no thunks in the profile, but the PAP doesn't look right (I don't know...).
Implementation
The board is represented as an immutable array of builders indexed by coordinates (tuples), type Board = Array (Int, Int) Builder (essentially, what should be printed at each coordinate). The function which converts the board into a builder is the expected strict fold, which handles newlines using the height and width of the board.
toBuilder :: RenderState -> Builder
toBuilder (RenderState b binf@(BoardInfo h w) gOver s) =
    --                  ^ b is the Array (Int, Int) Builder; h and w are the board's height and width
    if gOver
      then ppScore s <> fst (boardToString $ emptyGrid binf) -- game over: build an empty grid
      else ppScore s <> fst (boardToString b)                -- print the current board
  where
    -- Concatenate the builders while counting them, so that a newline is
    -- appended after every `w` cells.
    boardToString = foldl' fprint (mempty, 0)
    fprint (!s, !i) cell =
      if (i + 1) `mod` w == 0
        then (s <> cell <> B.charUtf8 '\n', i + 1)
        else (s <> cell, i + 1)
According to the .prof file, this function takes most of the time and space (92%, which is expected). Moreover, this is the only part of the code that produces a big builder, so the problem should be here.
The buffering mode
The above profile happens when BufferMode is set to LineBuffering (the default), but interestingly, if I change it to NoBuffering then the profile looks the same, except that a thunk appears and the Builder disappears...
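(For reference, the mode is switched with hSetBuffering; this is a minimal sketch, not the game's actual main:)
import qualified Data.ByteString.Builder as B
import System.IO (BufferMode (..), hSetBuffering, stdout)

main :: IO ()
main = do
    -- Toggling this single line between LineBuffering and NoBuffering is
    -- what produces the two different profiles described above.
    hSetBuffering stdout LineBuffering
    B.hPutBuilder stdout (B.stringUtf8 "frame goes here\n")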
The questions
I have reached a point where I don't know what's going on, hence my questions are a little bit vague:
Is my code with line buffering (the first profile) actually leaking? No thunk appears, but the PAP eating so much memory looks like a warning sign.
The second profile clearly(?) leaks; is there a standard way to inspect which part of the code is producing the thunk?
Am I completely missing something, and actually the profile looks fine?
In case anyone is interested, I think I've found the problem. It is the terminal speed... If I use a smaller board or a slower rendering rate (the picture is for a 50x70 board with 10 renders a second), then the memory usage is completely normal.
What I think is happening is that the board is printed to the console using B.hPutBuilder stdout; this action completes faster than the console can actually display the output, so the Haskell thread continues and creates another board, which then has to wait to be printed because the console is busy. I guess this somehow leads to two boards living in memory for a short time.
Other guesses are welcome!

How can I check if a subroutine has been run before?

I've currently got a Fortran program using the 2008 standard that has a subroutine that loads data from a file if it's the first run of the subroutine. On all runs, the subroutine interpolates over the data and returns two values, but the goal is to avoid reloading the same data from the file.
Initially, I had something like this:
module myModel_mod
    use myModelLoader_mod
    use linear_interpolation_module
    implicit none
contains
    subroutine myModel(A, B, C, modelFile, D, E)
        real :: A, B, C, D, E
        character(len=*) :: modelFile
        type(linear_interp_3d), save :: F, G
        real, dimension(:), allocatable, save :: As, Bs, Cs
        if (.not. allocated(As)) then
            call loadModel(modelFile, As, Bs, Cs)
            .
            . (processing of loaded data and creation of F and G occurs here)
            .
        end if
        call F%evaluate(A, B, C, D)
        call G%evaluate(A, B, C, E)
    end subroutine
end module
My module makes use of the finterp library. It interpolates a 3D gridded data set for two values, D and E. I'm having to rewrite this part of the code to fix a memory leak, and I'd like to fix where As, Bs, and Cs are left allocated. They don't need the SAVE attribute; it's left over from an older implementation of gridded interpolation I was using. However, if I remove it, then by my understanding, I can't check if As is allocated to see if the subroutine has been run before.
I've considered creating a logical flag variable with the SAVE attribute that gets set when the subroutine runs for the first time, but I believe that would still result in a small memory leak, albeit much smaller than what I currently have.
Is there a way to check for the "first run" condition in a way that doesn't result in a memory leak?
Memory leaks are impossible with allocatable arrays in Fortran. Unsaved allocatable local variables are deallocated on exit from the procedure. Saved ones are retained, but they cannot grow without limit: you only have a fixed number of local allocatable variables, and although they take some memory, they will not grow out of control.
To answer your title question: I use the approach with a saved logical variable, and when it is about allocating local or module arrays I just use if (allocated(...)).

Is there inherent "cost of carry" of garbage thunks in Haskell?

I often see a high number of cycles spent in GC when running GHC-compiled programs.
These numbers tend to be an order of magnitude higher than my JVM experience suggests they should be. In particular, the number of bytes "copied" by the GC seems to be vastly larger than the amount of data I'm computing.
Is such a difference between non-strict and strict languages fundamental?
tl;dr: Most of the stuff that the JVM does in stack frames, GHC does on the heap. If you wanted to compare GHC heap/GC stats with the JVM equivalent, you'd really need to account for some portion of the bytes/cycles the JVM spends pushing arguments on the stack or copying return values between stack frames.
Long version:
Languages targeting the JVM typically make use of its call stack. Each invoked method has an active stack frame that includes storage for the parameters passed to it, additional local variables, and temporary results, plus room for an "operand stack" used for passing arguments to and receiving results from other methods it calls.
As a simple example, if the Haskell code:
bar :: Int -> Int -> Int
bar a b = a * b
foo :: Int -> Int -> Int -> Int
foo x y z = let u = bar y z in x + u
were compiled to JVM, the byte code would probably look something like:
public static int bar(int, int);
  Code:
    stack=2, locals=2, args_size=2
       0: iload_0           // push a
       1: iload_1           // push b
       2: imul              // multiply and push result
       3: ireturn           // pop result and return it
public static int foo(int, int, int);
  Code:
    stack=2, locals=4, args_size=3
       0: iload_1           // push y
       1: iload_2           // push z
       2: invokestatic bar  // call bar, pushing result
       5: istore_3          // pop and save to "u"
       6: iload_0           // push x
       7: iload_3           // push u
       8: iadd              // add and push result
       9: ireturn           // pop result and return it
Note that calls to built-in primitives like imul and user-defined methods like bar involve copying/pushing the parameter values from local storage to the operand stack (using iload instructions) and then invoking the primitive or method. Return values then need to be saved/popped to local storage (with istore) or returned to the caller with ireturn; occasionally, a return value can be left on the stack to serve as an operand for another method invocation. Also, while it's not explicit in the byte code, the ireturn instruction involves a copy, from the callee's operand stack to the caller's operand stack. Of course, in actual JVM implementations, various optimizations are presumably possible to reduce copying.
When something else eventually calls foo to produce a computation, for example:
some_caller t = foo (1+3) (2+4) t + 1
the (unoptimized) code might look like:
iconst_1
iconst_3
iadd // put 1+3 on the stack
iconst_2
iconst_4
iadd // put 2+4 on the stack
iload_0 // put t on the stack
invokestatic foo
iconst 1
iadd
ireturn
Again, subexpressions are evaluated with a lot of pushing and popping on the operand stack. Eventually, foo is invoked with its arguments pushed on the stack and its result popped off for further processing.
All of this allocation and copying takes place on this stack, so there's no heap allocation involved in this example.
Now, what happens if that same code is compiled with GHC 8.6.4 (without optimization and on an x86_64 architecture for the sake of concreteness)? Well, the pseudocode for the generated assembly is something like:
foo [x, y, z] =
    u = new THUNK(sat_u)        // thunk, 32 bytes on heap
    jump: (+) x u
sat_u [] =                      // saturated closure for "bar y z"
    push UPDATE(sat_u)          // update frame, 16 bytes on stack
    jump: bar y z
bar [a, b] =
    jump: (*) a b
The calls/jumps to the (+) and (*) "primitives" are actually more complicated than I've made them out to be because of the typeclass that's involved. For example, the jump to (+) looks more like:
push CONTINUATION(\f -> f x u) // continuation, 24 bytes on stack
jump: (+) dNumInt // get the right (+) from typeclass instance
If you turn on -O2, GHC optimizes away this more complicated call, but it also optimizes away everything else that's interesting about this example, so for the sake of argument, let's pretend the pseudocode above is accurate.
Again, foo isn't of much use until someone calls it. For the some_caller example above, the portion of code that calls foo will look something like:
some_caller [t] =
    ...
    foocall = new THUNK(sat_foocall)    // thunk, 24 bytes on heap
    ...
sat_foocall [] =                        // saturated closure for "foo (1+3) (2+4) t"
    ...
    v = new THUNK(sat_v)                // thunk "1+3", 16 bytes on heap
    w = new THUNK(sat_w)                // thunk "2+4", 16 bytes on heap
    push UPDATE(sat_foocall)            // update frame, 16 bytes on stack
    jump: foo sat_v sat_w t
sat_v [] = ...
sat_w [] = ...
Note that nearly all of this allocation and copying takes place on the heap, rather than the stack.
Now, let's compare these two approaches. At first blush, it looks like the culprit really is lazy evaluation. We're creating these thunks all over the place that wouldn't be necessary if evaluation was strict, right? But let's look at one of these thunks more carefully. Consider the thunk for sat_u in the definition of foo. It's 32 bytes / 4 words with the following contents:
// THUNK(sat_u)
word 0:  ptr to sat_u info table/code
     1:  space for return value
     // variables we closed over:
     2:  ptr to "y"
     3:  ptr to "z"
The creation of this thunk isn't fundamentally different than the JVM code:
0: iload_1 // push y
1: iload_2 // push z
2: invokestatic bar // call bar, pushing result
5: istore_3 // pop and save to "u"
Instead of pushing y and z onto the operand stack, we loaded them into a heap-allocated thunk. Instead of popping the result off the operand stack into our stack frame's local storage and managing stack frames and return addresses, we left space for the result in the thunk and pushed a 16-byte update frame onto the stack before transferring control to bar.
Similarly, in the call to foo in some_caller, instead of evaluating the argument subexpressions by pushing constants on the stack and invoking primitives to push results on the stack, we created thunks on the heap, each of which included a pointer to info table / code for invoking primitives on those arguments and space for the return value; an update frame replaced the stack bookkeeping and result copying implicit in the JVM version.
Ultimately, thunks and update frames are GHC's replacement for stack-based parameter and result passing, local variables, and temporary workspace. A lot of activity that takes place in JVM stack frames takes place in the GHC heap.
Now, most of the stuff in JVM stack frames and on the GHC heap quickly becomes garbage. The main difference is that in the JVM, stack frames are automatically tossed out when a function returns, after the runtime has copied the important stuff out (e.g., return values). In GHC, the heap needs to be garbage collected. As others have noted, the GHC runtime is built around the idea that the vast majority of heap objects will immediately become garbage: a fast bump allocator is used for initial heap object allocation, and instead of copying out the important stuff every time a function returns (as for the JVM), the garbage collector copies it out when the bump heap gets kind of full.
Obviously, the above toy example is ridiculous. In particular, things are going to get much more complicated when we start talking about code that operates on Java objects and Haskell ADTs, rather than Ints. However, it serves to illustrate the point that a direct comparison of heap usage and GC cycles between GHC and JVM doesn't make a whole lot of sense. Certainly, an exact accounting doesn't really seem possible as the JVM and GHC approaches are too fundamentally different, and the proof would be in real-world performance. At the very least, an apples-to-apples comparison of GHC heap usage and GC stats needs to account for some portion of the cycles the JVM spends pushing, popping, and copying values between operand stacks. In particular, at least some fraction of JVM return instructions should count towards GHC's "bytes copied".
As for the contribution of "laziness" to heap usage (and heap "garbage" in particular), it seems hard to isolate. Thunks really play a dual role as a replacement for stack-based operand passing and as a mechanism for deferred evaluation. Certainly a switch from laziness to strictness can reduce garbage -- instead of first creating a thunk and then eventually evaluating it to another closure (e.g., a constructor), you can just create the evaluated closure directly -- but that just means that instead of your simple program allocating a mind-blowing 172 gigabytes on the heap, maybe the strict version "only" allocates a modest 84 gigabytes.
As far as I can see, the specific contribution of lazy evaluation to "bytes copied" should be minimal -- if a closure is important at GC time, it will need to be copied. If it's still an unevaluated thunk, the thunk will be copied. If it's been evaluated, just the final closure will need to be copied. If anything, since thunks for complicated structures are much smaller than their evaluated versions, laziness should typically reduce bytes copied. Instead, the usual big win with strictness is that it allows certain heap objects (or stack objects) to become garbage faster so we don't end up with space leaks.
No, laziness does not inherently lead to a large amount of copying in GC. The programmer's failure to manage laziness properly, however, can certainly do so. For example, if a persistent data structure ends up full of chains of thunks due to lazy modification, then it will end up badly bloated.
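As a small illustration (not code from the question), compare a value-lazy map with a value-strict map used as a counter; both typecheck, but the lazy one leaves a chain of suspended (+) applications behind every key:
import Data.List (foldl')
import qualified Data.Map as M          -- value-lazy API
import qualified Data.Map.Strict as MS  -- value-strict API

-- Each lazy insertWith stacks another suspended (+) on the old value,
-- so a frequently updated key drags a growing chain of thunks around.
lazyCounts :: [String] -> M.Map String Int
lazyCounts = foldl' (\m k -> M.insertWith (+) k 1 m) M.empty

-- The strict variant forces the combined value at insert time, so the
-- map's payload stays as plain Ints.
strictCounts :: [String] -> MS.Map String Int
strictCounts = foldl' (\m k -> MS.insertWith (+) k 1 m) MS.empty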
Another major issue you may be encountering, as Daniel Wagner mentioned, is the cost of immutability. While it is certainly possible to program with mutable structures in Haskell, it is much more idiomatic to work with immutable ones when possible. Immutable structure designs have various trade-offs. For example, ones designed for high performance when used persistently tend to have low branching factors to increase sharing, which leads to some bloat when they're used ephemerally.

Haskell 'count occurrences' function

I implemented a count function in Haskell and I am wondering whether it will behave badly on large lists:
count :: Eq a => a -> [a] -> Int
count x = length . filter (==x)
I believe the length function runs in linear time, is this correct?
Edit: Refactor suggested by #Higemaru
length runs in time linear in the size of the list, yes.
Normally, you would be worried that your code had to take two passes through the list: first one to filter and then one to count the length of the resulting list. However, I believe this does not happen here because filter is not strict on the structure of the list. Instead, the length function forces the elements of the filtered list as it goes along, doing the actual count in one pass.
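To see why a single pass suffices, here is roughly the loop that the composition behaves like (an illustration only, not what GHC literally generates):
{-# LANGUAGE BangPatterns #-}

-- Walk the list once, bumping a strict accumulator on every match.
countLoop :: Eq a => a -> [a] -> Int
countLoop x = go 0
  where
    go !acc []     = acc
    go !acc (y:ys)
      | y == x     = go (acc + 1) ys
      | otherwise  = go acc ys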
I think you can make it slightly shorter
count :: Eq a => a -> [a] -> Int
count x = length . filter (x==)
(I would have written a (lowly) comment if I could)
That really depends on the list. For a normal, lazily evaluated list of Ints on my computer, I see this function running in about 2 seconds for 10^9 elements, 0.2 seconds for 10^8, and 0.3 seconds for 10^7, so it appears to run in linear time. You can check this yourself by passing the flags +RTS -s -RTS to your executable when running it from the command line.
I also tried running it with more cores, but it doesn't seem to do anything but increase the memory usage a bit.
An added bonus of the lazy computation is that you only make a single pass over the list. filter and length get turned into a single loop by the compiler (with optimizations turned on), so you save both memory and time.
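For example, a small driver like this (a hypothetical Count.hs, not taken from the question) can be built with ghc -O2 Count.hs and run as ./Count +RTS -s to check the run time and the maximum residency:
-- Count.hs
count :: Eq a => a -> [a] -> Int
count x = length . filter (x ==)

main :: IO ()
main = print (count 0 [i `mod` 10 | i <- [1 .. 10 ^ 7 :: Int]])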

No error message in Haskell

Just out of curiosity, I made a simple script to check speed and memory efficiency of constructing a list in Haskell:
wasteMem :: Int -> [Int]
wasteMem 0 = [199]
wasteMem x = (12432483483467856487256348746328761:wasteMem (x-1))
main = do
    putStrLn("hello")
    putStrLn(show (wasteMem 10000000000000000000000000000000000))
The strange thing is, when I tried this, it didn't run out of memory or stack space; it only printed [199], the same as running wasteMem 0. It doesn't even print an error message... why? Entering this large number in ghci just prints the number, so I don't think it's a rounding or reading error.
Your program is using a number greater than maxBound :: Int32. This means it will behave differently on different platforms. For GHC on x86_64, Int is 64 bits (32 bits otherwise, though the Haskell report only promises 29 bits). This means your absurdly large value (1x10^34) is represented as 4003012203950112768 for me, and as zero for you 32-bit folks:
GHCI> 10000000000000000000000000000000000 :: Int
4003012203950112768
GHCI> 10000000000000000000000000000000000 :: Data.Int.Int32
0
This could be made platform independent by either using a fixed-size type (ex: from Data.Word or Data.Int) or using Integer.
All that said, this is a poorly conceived test to begin with. Haskell is lazy, so the amount of memory consumed by wasteMem n for any value n is minimal - it's just a thunk. Once you try to show the result, it will grab elements off the list one at a time - first generating the string "[12432483483467856487256348746328761," and leaving the rest of the list as a thunk. The first value can be garbage collected before the second value is even considered (a constant-space program).
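For illustration, a sketch of the platform-independent variant (my guess at the intent, using Integer for both the count and the elements; not the original code):
-- The count is an Integer, so 10^34 no longer wraps around on 64-bit
-- (or 32-bit) Ints; take 5 shows that laziness only builds five cells.
wasteMem :: Integer -> [Integer]
wasteMem 0 = [199]
wasteMem x = 12432483483467856487256348746328761 : wasteMem (x - 1)

main :: IO ()
main = do
    putStrLn "hello"
    print (take 5 (wasteMem (10 ^ 34)))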
Adding to Thomas' answer, if you really want to waste space, you have to perform an operation on the list, which needs the whole list in memory at once. One such operation is sorting:
print . sort . wasteMem $ (2^16)
Also note that it's almost impossible to estimate the run-time memory usage of your list. If you want a more predictable memory benchmark, create an unboxed array instead of a list. This also doesn't require any complicated operation to ensure that everything stays in memory. Indexing a single element in an array already makes sure that the array is in memory at least once.
