I'm trying to understand how GHC Haskell synchronises the computation of "basic" values (i.e. not IORef, TVar, etc.) between threads. I have searched for information about this but haven't found anything clear.
Take the following example program:
import Control.Concurrent
expensiveFunction x = sum [1..x] -- Just an example
val = expensiveFunction 12345
thread1 = print val
thread2 = print val
main = do
  forkOS thread1
  forkOS thread2
I understand that the value val will initially be represented by an unevaluated closure. In order to print val, the program must first evaluate it. Once a toplevel binding has been evaluated it should not need to be evaluated again.
Is the representation for "val" even shared by separate threads?
If for some reason thread1 completes evaluation first, can it convey the final computed value to thread2 by swapping out the pointer? How would that be synchronised?
If thread1 is busy evaluating when thread2 wants the value, does thread2 wait for it to finish or do they both race to evaluate it first?
In GHC-compiled programs, values go through three(-ish) phases of evaluation:
Thunk. This is where they start.
Black hole. When forced, a thunk is converted to a black hole and computation begins. Other threads that request the value of a black hole will instead add themselves to a notification list for when the black hole is updated. (Also, if the thunk itself tries to access the black hole, it will short-circuit to an exception instead of waiting forever.)
Evaluated. When the computation finishes, its last task is to update the black hole to a plain value (well, WHNF value, anyway).
The pointer that is getting updated during these phase transitions is shared with other threads and not protected from race conditions. This means that, very rarely, it is possible for two (or more) threads to both see a pointer in phase 1 and for both to execute the 1 -> 2 transition; in that case, both will evaluate the thunk, and the transition 2 -> 3 will also happen twice. Notably, though, the 1 -> 2 transition is typically much faster than the computation it is replacing (essentially just a memory access or two), in part exactly so that the race is difficult to trigger.
Because the language is pure, the racing threads will come to the same answer. So there is no semantic difficulty here. But in some rare cases, a little bit of work may be duplicated. It is very, very rare that the overhead of a lock on every 1 -> 2 transition would be better than this slight duplication. (If you find it is in your case, consider manually protecting the evaluation of whichever expensive thing is being shared!)
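For instance, here is a minimal sketch (reusing the question's expensiveFunction; the forkIO/MVar plumbing is added purely for illustration) of manually protecting the evaluation of a shared expensive value so that at most one thread does the work:

import Control.Concurrent
import Control.Exception (evaluate)

expensiveFunction :: Integer -> Integer
expensiveFunction x = sum [1..x]

main :: IO ()
main = do
  lock <- newMVar ()                 -- one lock guarding the evaluation
  let val = expensiveFunction 123456789
      -- Force the value while holding the lock; the second caller
      -- finds it already evaluated to WHNF and returns immediately.
      forceVal = withMVar lock (\_ -> evaluate val)
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (forceVal >>= print >> putMVar done1 ())
  _ <- forkIO (forceVal >>= print >> putMVar done2 ())
  takeMVar done1
  takeMVar done2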
Corollary: great care must be taken with the unsafe IO a -> a family of functions; some guarantee synchronization of the evaluation of the resulting a and some don't. If your IO a action is not as pure as you promised it is, and a race causes it to be executed twice, all manner of strange heisenbugs can occur.
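A hedged illustration of that distinction: unsafePerformIO performs a duplicate-suppression check (noDuplicate), while unsafeDupablePerformIO omits it, so its action may, rarely, run twice if racing threads force the same thunk. The counter below is only an illustrative stand-in for an action that is "not as pure as promised":

import Data.IORef
import System.IO.Unsafe (unsafeDupablePerformIO, unsafePerformIO)

-- Illustrative global counter (hypothetical; not from the answer above).
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- If two threads race to force this thunk, unsafeDupablePerformIO gives
-- no guarantee the increment happens only once; unsafePerformIO would
-- suppress the duplication.
notReallyPure :: Int
notReallyPure = unsafeDupablePerformIO $ do
  atomicModifyIORef' counter (\n -> (n + 1, ()))
  readIORef counter
{-# NOINLINE notReallyPure #-}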
How does CAS work? How does it interact with a garbage collector? Where is the problem, and how does it work without a garbage collector?
I was reading a presentation about CAS and using it for a "write rarely, read many" problem, and it said that using CAS is convenient when you have a garbage collector, but that there is a problem (not specified) when you cannot use one.
Can you tell me something about this? If you could sum up the principle of CAS first, it would be appreciated.
OK, so CAS is an atomic instruction; that is, there is special hardware support for it.
Its main use is to avoid locks altogether when implementing data structures and other operations. With locks, if a thread takes a page fault or a cache miss, or is descheduled by the OS, it takes the lock with it and all the other threads are blocked. This obviously yields serious performance issues.
CAS is the core of lock-free programming.
CAS basically is the following:
CAS(CURRENT_VALUE, OLD_VALUE, NEW_VALUE) <=>
if CURRENT_VALUE==OLD_VALUE then CURRENT_VALUE = NEW_VALUE
You have a shared variable (e.g. a field of a class) and you have no idea whether other threads modified it between the time you read it and the time you want to write to it.
CAS helps you on the write side: the compare-and-swap is done atomically in hardware and no lock is involved, so even if your thread goes to sleep the rest of the threads can keep operating on the data structure.
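The answer's pseudocode is language-agnostic; as a sketch in Haskell, one can emulate the same operation on top of atomicModifyIORef' (real lock-free code would use a hardware CAS, e.g. via the atomic-primops package). Note that this stand-in compares values with Eq, whereas a hardware CAS compares the pointer/word itself - which is exactly what makes the ABA problem below possible:

import Data.IORef

-- compareAndSwap ref old new: if the current value equals old,
-- replace it with new; report whether the swap happened.
compareAndSwap :: Eq a => IORef a -> a -> a -> IO Bool
compareAndSwap ref old new =
  atomicModifyIORef' ref (\cur -> if cur == old then (new, True) else (cur, False))

-- A lock-free increment: read, try to CAS, retry on failure.
incrementCAS :: IORef Int -> IO ()
incrementCAS ref = do
  old <- readIORef ref
  ok  <- compareAndSwap ref old (old + 1)
  if ok then return () else incrementCAS ref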
The issue with CAS on non-GC systems is the ABA problem and an example is the following:
You have a single linked list: HEAD->A->X->Y->Z
Thread 1: let's read A: localA = A; localA_Value = A.Value (let's say 5)
Thread 2: let's delete A: HEAD->X->Y->Z (the node A is freed)
Thread 3: let's add a new node at the start (the malloc happens to hand back the spot where the old A was): HEAD->A'->X->Y->Z (A'.Value = 10)
Thread 1 resumes and wants to swap the head node A for a new node B: CAS(HEAD, localA, B). This thread expects that if the CAS passes, the value of the node being replaced is still 5. Wrong: the CAS passes because localA and A' have the same memory location, even though localA.Value != A'.Value - thus the operation shouldn't have been performed.
The thing is that in GC-enabled systems this can never happen, since localA still holds a reference to that memory location, and thus A' will never be allocated at that location while it is in use.
In Haskell, I created a Vector of 1000000 IntMaps. I then used Gloss to render a picture in a way that accesses random intmaps from that vector.
That is, I had to keep every single one of them in memory. The rendering function itself is very lightweight, so the performance was supposed to be good.
Yet, the program was running at 4fps. Upon profiling, I noticed 95% of the time was spent on GC. Fair enough:
The GC is crazily scanning my vector, even though it never changes.
Is there any way to tell GHC "this big value is needed and will not change - do not try to collect anything inside it"?
Edit: the program below is sufficient to replicate the issue.
import qualified Data.IntMap as Map
import qualified Data.Vector as Vec
import Graphics.Gloss
import Graphics.Gloss.Interface.IO.Animate
import System.Random
main = do
  let size = 10000000
  let gen i = Map.fromList $ zip [mod i 10..0] [0..mod i 10]
  let vec = Vec.fromList $ map gen [0..size]
  let draw t = do
        rnd <- randomIO :: IO Int
        let empty = Map.null $ vec Vec.! mod rnd size
        let rad = if empty then 10 else 50
        return $ translate (20 * cos t) (20 * sin t) (circle rad)
  animateIO (InWindow "hi" (256,256) (1,1)) white draw
This accesses a random map in a huge vector and draws a rotating circle whose radius depends on whether that map is empty.
Despite that logic being very simple, the program struggles at around 1 FPS here.
gloss is the culprit here.
First, a little background on GHC's garbage collector. GHC uses (by default) a generational, copying garbage collector. This means that the heap consists of several memory areas called generations. Objects are allocated into the youngest generation. When a generation becomes full, it is scanned for live objects and the live objects are copied into the next older generation, and then the generation that was scanned is marked as empty. When the oldest generation becomes full, the live objects are instead copied into a new version of the oldest generation.
An important fact to take away from this is that the GC only ever examines live objects. Dead objects are never touched at all. This is great when collecting generations that are mostly garbage, as often happens in the youngest generation. It's not good if long-lived data undergoes many GCs, as it will be copied repeatedly. (It can also be counterintuitive to those used to malloc/free-style memory management, where allocation and deallocation are both quite expensive, but leaving objects allocated for a long time has no direct cost.)
Now, the "generational hypothesis" is that most objects are either short-lived or long-lived. The long-lived objects will quickly end up in the oldest generation since they are alive at every collection. Meanwhile, most of the short-lived objects that are allocated will never survive the youngest generation; only those that happen to be alive when it is collected will be promoted to the next generation. Similarly, most of those short-lived objects that do get promoted will not survive to the third generation. As a result, the oldest generation that holds the long-lived objects should fill up very slowly, and its expensive collections that have to copy all the long-lived objects should occur rarely.
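A rough way to watch this effect yourself (a sketch, not part of the answer; it assumes a GHC whose base ships GHC.Stats.getRTSStats, and it must be run with +RTS -T): the bytes copied by a major collection track the live long-lived data, not the garbage.

import GHC.Stats (gc, gcdetails_copied_bytes, getRTSStats)
import System.Mem (performGC)

main :: IO ()
main = do
  let longLived = [1 .. 1000000 :: Integer]   -- long-lived data
  print (length longLived)                    -- force it into existence
  performGC                                   -- a major GC must copy all of it
  stats <- getRTSStats
  print (gcdetails_copied_bytes (gc stats))   -- bytes copied by that collection
  print (last longLived)                      -- keep it live across the GC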
Now, all of this is actually true in your program, except for one problem:
let displayFun backendRef = do
        -- extract the current time from the state
        timeS <- animateSR `getsIORef` AN.stateAnimateTime

        -- call the user action to get the animation frame
        picture <- frameOp (double2Float timeS)

        renderS <- readIORef renderSR
        portS <- viewStateViewPort <$> readIORef viewSR
        windowSize <- getWindowDimensions backendRef

        -- render the frame
        displayPicture
                windowSize
                backColor
                renderS
                (viewPortScale portS)
                (applyViewPortToPicture portS picture)

        -- perform GC every frame to try and avoid long pauses
        performGC
gloss tells the GC to collect the oldest generation every frame!
This might be a good idea if those collections are expected to take less time than the delay between frames anyways, but it's clearly not a good idea for your program. If you remove that performGC call from gloss, then your program runs quite quickly. Presumably if you let it run for long enough, then the oldest generation will eventually fill up and you might get a delay of a few tenths of a second as the GC copies all your long-lived data, but that's much better than paying that cost every frame.
All that said, there is a ticket #9052 about adding a stable generation, which would also suit your needs nicely. See there for more details.
I would try compiling with -rtsopts (or baking defaults in with -with-rtsopts) and then playing with the heap (-H) and/or allocation area (-A) RTS options. Those greatly influence how the GC works.
More info here: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime-control.html
To add to Reid's answer, I've found performMinorGC (added in https://ghc.haskell.org/trac/ghc/ticket/8257) to be the best of both worlds here.
Without any explicit GC scheduling, I still get frequent collection-related frame drops as the nursery becomes exhausted. But performGC indeed becomes performance-killing as soon as there is any significant long-lived memory usage.
performMinorGC does what we want, ignoring long-term memory and cleaning up the garbage from each frame predictably - especially if you tune -H and -A to ensure that the per-frame garbage fits in the nursery.
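A sketch of that scheduling (a hypothetical helper, not gloss's actual code): call performMinorGC from the per-frame callback, and optionally fall back to an occasional full performGC; pair it with -A/-H RTS settings sized so a frame's garbage fits in the nursery.

import System.Mem (performGC, performMinorGC)

-- Per-frame GC scheduling sketch: cheap minor collection every frame,
-- full collection only rarely (every 600th frame is an arbitrary choice).
-- Compile with -rtsopts and run with e.g. +RTS -A64m to size the nursery.
perFrameCleanup :: Int -> IO ()
perFrameCleanup frameNumber = do
  performMinorGC
  if frameNumber `mod` 600 == 0
    then performGC
    else return ()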
The docs say:
In a concurrent program, IORef operations may appear out-of-order to
another thread, depending on the memory model of the underlying
processor architecture...The implementation is required to ensure that
reordering of memory operations cannot cause type-correct code to go
wrong. In particular, when inspecting the value read from an IORef,
the memory writes that created that value must have occurred from the
point of view of the current thread.
Which I'm not even entirely sure how to parse. Edward Yang says
In other words, “We give no guarantees about reordering, except that
you will not have any type-safety violations.” ...
the last sentence remarks that an IORef is not allowed to point to
uninitialized memory
So... it won't break all of Haskell; not very helpful. The discussion from which the memory model example arose also left me with questions (even Simon Marlow seemed a bit surprised).
Things that seem clear to me from the documentation
within a thread an atomicModifyIORef "is never observed to take place ahead of any earlier IORef operations, or after any later IORef operations" i.e. we get a partial ordering of: stuff above the atomic mod -> atomic mod -> stuff after. Although, the wording "is never observed" here is suggestive of spooky behavior that I haven't anticipated.
A readIORef x might be moved before writeIORef y, at least when there are no data dependencies
Logically I don't see how something like readIORef x >>= writeIORef y could be reordered
What isn't clear to me
Will newIORef False >>= \v-> writeIORef v True >> readIORef v always return True?
In the maybePrint case (from the IORef docs) would a readIORef myRef (along with maybe a seq or something) before readIORef yourRef have forced a barrier to reordering?
What's the straightforward mental model I should have? Is it something like:
within and from the point of view of an individual thread, the
ordering of IORef operations will appear sane and sequential; but the
compiler may actually reorder operations in such a way that breaks
certain assumptions in a concurrent system; however when a thread does
atomicModifyIORef, no threads will observe operations on that
IORef that appeared above the atomicModifyIORef to happen after,
and vice versa.
...? If not, what's the corrected version of the above?
If your response is "don't use IORef in concurrent code; use TVar" please convince me with specific facts and concrete examples of the kind of things you can't reason about with IORef.
I don't know Haskell concurrency, but I know something about memory models.
Processors can reorder instructions however they like: loads may go ahead of loads, loads may go ahead of stores, loads of dependent values may go ahead of loads of the values they depend on (a[i] may load the value from the array first, then the reference to the array a!), and stores may be reordered with each other. You simply cannot put a finger on it and say "these two things definitely appear in a particular order, because there is no way they can be reordered". But in order for concurrent algorithms to operate, they need to observe the state of other threads. This is where it is important for thread state to proceed in a particular order. This is achieved by placing barriers between instructions, which guarantee that the order of instructions appears the same to all processors.
Typically (in one of the simplest models), you want two types of ordered instructions: an ordered load that does not go ahead of any other ordered loads or stores, and an ordered store that does not go ahead of any instructions at all, plus a guarantee that all ordered instructions appear in the same order to all processors. This way you can reason about the IRIW kind of problem:
Thread 1: x=1
Thread 2: y=1
Thread 3: r1=x;
          r2=y;
Thread 4: r4=y;
          r3=x;
If all of these operations are ordered loads and ordered stores, then you can conclude that the outcome (1,0,0,1)=(r1,r2,r3,r4) is not possible. Indeed, the ordered stores in Threads 1 and 2 must appear in some single order to all threads, and r1=1, r2=0 is a witness that y=1 was executed after x=1. In turn, this means that Thread 4 can never observe r4=1 without also observing r3=1 (its load of x is executed after its load of y, and if the stores were executed in that order, observing y==1 implies x==1).
Also, if the loads and stores were not ordered, the processors would usually be allowed to observe the assignments to appear even in different orders: one might see x=1 appear before y=1, the other might see y=1 appear before x=1, so any combination of values r1,r2,r3,r4 is permitted.
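To connect this back to the question's setting, here is a sketch of the IRIW experiment written with plain IORefs (the MVar plumbing is only there to join the threads). With ordinary readIORef/writeIORef, GHC documents no ordering guarantee, so whether (1,0,0,1) is observable is exactly the question of whether these behave as ordered loads and stores on your platform:

import Control.Concurrent
import Data.IORef

iriw :: IO (Int, Int, Int, Int)
iriw = do
  x <- newIORef 0
  y <- newIORef 0
  r1 <- newIORef 0; r2 <- newIORef 0
  r3 <- newIORef 0; r4 <- newIORef 0
  done <- newEmptyMVar
  let fork body = forkIO (body >> putMVar done ())
  _ <- fork (writeIORef x 1)                                                  -- Thread 1
  _ <- fork (writeIORef y 1)                                                  -- Thread 2
  _ <- fork (readIORef x >>= writeIORef r1 >> readIORef y >>= writeIORef r2)  -- Thread 3
  _ <- fork (readIORef y >>= writeIORef r4 >> readIORef x >>= writeIORef r3)  -- Thread 4
  sequence_ [takeMVar done | _ <- [1 .. 4 :: Int]]
  (,,,) <$> readIORef r1 <*> readIORef r2 <*> readIORef r3 <*> readIORef r4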
The ordered load and ordered store can be implemented, sufficiently, like so:

ordered load:
     load x
     load-load   -- barrier stopping later loads from going ahead of preceding loads
     load-store  -- nothing is allowed to go ahead of an ordered load

ordered store:
     load-store
     store-store -- the ordered store must appear after all stores
                 -- preceding it in program order - serialize all stores
                 -- (flush write buffers)
     store x,v
     store-load  -- ordered loads must not go ahead of the ordered store
                 -- preceding them in program order
Of these two, I can see that IORef implements an ordered store (atomicWriteIORef), but I don't see an ordered load (an atomicReadIORef), without which you cannot reason about the IRIW problem above. This is not a problem if your target platform is x86, because on that platform all loads are executed in program order and stores never go ahead of loads (in effect, all loads are ordered loads).
An atomic update (atomicModifyIORef) seems to me to be an implementation of a so-called CAS loop (a compare-and-set loop, which does not stop until the value is atomically set to b, provided its current value is a). You can see the atomic-modify operation as a fusion of an ordered load and an ordered store, with all those barriers there, executed atomically - no processor is allowed to insert a modification between the load and the store of a CAS.
Furthermore, writeIORef is cheaper than atomicWriteIORef, so you want to use writeIORef as much as your inter-thread communication protocol permits. Whereas writeIORef x vx >> writeIORef y vy >> atomicWriteIORef z vz >> readIORef t does not guarantee the order in which the writeIORefs appear to other threads with respect to each other, there is a guarantee that they will both appear before the atomicWriteIORef - so, seeing z==vz, you can conclude that at this moment x==vx and y==vy, and that IORef t was loaded after the stores to x, y, z could be observed by other threads. This latter point requires readIORef to be an ordered load, which is not provided as far as I can tell, but it will behave like an ordered load on x86.
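A sketch of that publication pattern with concrete code (the names are illustrative, not from the answer): plain writes to the payload, then an atomic write to a flag; a reader that sees the flag set may conclude the payload writes have happened (the read side relies on readIORef behaving as an ordered load, e.g. on x86, as noted above):

import Data.IORef

-- Writer: publish the payload, then set the flag atomically.
publish :: IORef Int -> IORef Int -> IORef Bool -> IO ()
publish x y flag = do
  writeIORef x 42                -- plain store
  writeIORef y 43                -- plain store
  atomicWriteIORef flag True     -- both plain stores are visible before this

-- Reader: if the flag is seen as True, x and y hold the published values.
consume :: IORef Int -> IORef Int -> IORef Bool -> IO (Maybe (Int, Int))
consume x y flag = do
  ready <- readIORef flag
  if ready
    then Just <$> ((,) <$> readIORef x <*> readIORef y)
    else return Nothing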
Typically you don't use concrete values of x, y, z when reasoning about the algorithm. Instead, some algorithm-dependent invariants about the assigned values must hold, and can be proven - for example, as in the IRIW case you can guarantee that Thread 4 will never see (0,1)=(r3,r4) if Thread 3 sees (1,0)=(r1,r2), and Thread 3 can take advantage of this: it means something is mutually excluded without acquiring any mutex or lock.
Here is an example (not in Haskell) that will not work if loads are not ordered, or if ordered stores do not flush write buffers (the requirement that written values become visible before the ordered load executes).
Suppose z will show either x (until y is computed) or y (once x has been computed, too). Don't ask why; it is not very easy to see outside of its context - it is a kind of queue - just enjoy the sort of reasoning that is possible.
Thread 1: x=1;
          if (z==0) compareAndSet(z, 0, y == 0 ? x : y);

Thread 2: y=2;
          if (x != 0) while ((tmp=z) != y && !compareAndSet(z, tmp, y));
So, two threads set x and y, then set z to x or y, depending on whether y or x were computed, too. Assuming initially all are 0. Translating into loads and stores:
Thread 1: store x,1
          load z
          if ==0 then
            load y
            if ==0 then load x   -- if loaded y is still 0, load x into tmp
                   else load y   -- otherwise, load y into tmp
            CAS z, 0, tmp        -- CAS whatever was loaded in the previous if-statement
                                 -- (the CAS may fail, but see explanation)

Thread 2: store y,2
          load x
          if !=0 then
            loop: load z                 -- into tmp
                  load y
                  if !=tmp then          -- compare loaded y to tmp
                    CAS z, tmp, y        -- attempt to CAS z: if it is still tmp, set to y
                    if ! then goto loop  -- if CAS did not succeed, go to loop
If Thread 1's load z is not an ordered load, it is allowed to go ahead of the ordered store (store x). That means that wherever z is loaded to (a register, a cache line, the stack, ...), the value it sees is one that existed before the value of x could be visible. Looking at that value is useless - you cannot then judge how far Thread 2 has gotten. For the same reason you must have a guarantee that the write buffers were flushed before load z executed - otherwise it will still appear as a load of a value that existed before Thread 2 could see the value of x. This is important, as will become clear below.
If Thread 2's load x or load z is not an ordered load, it may go ahead of store y and will observe values that were written before y was visible to other threads.
However, note that if the loads and stores are ordered, then the threads can negotiate who is to set the value of z without contending over z. For example, if Thread 2 observes x==0, there is a guarantee that Thread 1 will definitely execute x=1 later and will see z==0 after that - so Thread 2 can leave without attempting to set z.
If Thread 1 observes z==0, then it should try to set z to x or y. So, first it checks whether y has been set already. If it hasn't, it will be set in the future, so try to set z to x - the CAS may fail, but only if Thread 2 concurrently set z to y, so there is no need to retry. Similarly there is no need to retry if Thread 1 observed that y has been set: if the CAS fails, then Thread 2 has set z to y. Thus we can see that Thread 1 sets z to x or y in accordance with the requirement, and does not contend over z too much.
On the other hand, Thread 2 can check whether x has been computed already. If not, then it will be Thread 1's job to set z. If Thread 1 has computed x, then Thread 2 needs to set z to y. Here we do need a CAS loop, because a single CAS may fail if Thread 1 is concurrently attempting to set z to x or y.
The important takeaway here is that if "unrelated" loads and stores are not serialized (including flushing write buffers), no such reasoning is possible. However, once loads and stores are ordered, both threads can figure out the path each of them will take in the future, and in that way eliminate contention in half the cases. Most of the time x and y will be computed at significantly different times, so if y is computed before x, it is likely that Thread 2 will not touch z at all. (Typically, "touching z" also possibly means "wake up a thread waiting on a condition variable z", so it is not only a matter of loading something from memory.)
within a thread an atomicModifyIORef "is never observed to take place
ahead of any earlier IORef operations, or after any later IORef
operations" i.e. we get a partial ordering of: stuff above the atomic
mod -> atomic mod -> stuff after. Although, the wording "is never
observed" here is suggestive of spooky behavior that I haven't
anticipated.
"is never observed" is standard language when discussing memory reordering issues. For example, a CPU may issue a speculative read of a memory location earlier than necessary, so long as the value doesn't change between when the read is executed (early) and when the read should have been executed (in program order). That's entirely up to the CPU and cache though, it's never exposed to the programmer (hence language like "is never observed").
A readIORef x might be moved before writeIORef y, at least when there
are no data dependencies
True
Logically I don't see how something like readIORef x >>= writeIORef y
could be reordered
Correct, as that sequence has a data dependency. The value to be written depends upon the value returned from the first read.
For the other questions: newIORef False >>= \v-> writeIORef v True >> readIORef v will always return True (there's no opportunity for other threads to access the ref here).
In the maybePrint case, there's very little you can do to ensure this works reliably in the face of new optimizations added to future GHCs and across various CPU architectures. If you write:
writeIORef myRef True
x <- readIORef myRef
yourVal <- x `seq` readIORef yourRef
Even though GHC 7.6.3 produces correct cmm (and presumably asm, although I didn't check), there's nothing to stop a CPU with a relaxed memory model from moving the readIORef yourRef to before all of the myref/seq stuff. The only 100% reliable way to prevent it is with a memory fence, which GHC doesn't provide. (Edward's blog post does go through some of the other things you can do now, as well as why you may not want to rely on them).
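Short of a real fence, the closest documented tool is the ordering promise quoted in the question for atomicModifyIORef; here is a sketch (not something the answer endorses as fully equivalent to a fence) of using it in place of the plain read:

import Data.IORef

-- Read an IORef through atomicModifyIORef', which the documentation says
-- is never observed to move past the thread's other IORef operations;
-- a plain readIORef carries no such promise.
readWithBarrier :: IORef a -> IO a
readWithBarrier ref = atomicModifyIORef' ref (\x -> (x, x))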
I think your mental model is correct, however it's important to know that the possible apparent reorderings introduced by concurrent ops can be really unintuitive.
Edit: at the cmm level, the code snippet above looks like this (simplified, pseudocode):
[StackPtr+offset] := True
x := [StackPtr+offset]
if (notEvaluated x) (evaluate x)
yourVal := [StackPtr+offset2]
So there are a couple things that can happen. GHC as it currently stands is unlikely to move the last line any earlier, but I think it could if doing so seemed more optimal. I'm more concerned that, if you compile via LLVM, the LLVM optimizer might replace the second line with the value that was just written, and then the third line might be constant-folded out of existence, which would make it more likely that the read could be moved earlier. And regardless of what GHC does, most CPU memory models allow the CPU itself to move the read earlier absent a memory barrier.
See http://en.wikipedia.org/wiki/Memory_ordering for non-atomic concurrent reads and writes (basically: when you don't use atomics, just look at the memory-ordering model for your target CPU).
Currently GHC can be regarded as not reordering your reads and writes for non-atomic (and imperative) loads and stores. However, GHC Haskell currently doesn't specify any sort of concurrent memory model, so those non-atomic operations will have the ordering semantics of the underlying CPU model, as linked above.
In other words, GHC currently has no formal concurrency memory model, and because optimization algorithms tend to be defined with respect to some model of equivalence, there's no reordering currently in play there.
That is: the only semantic model you can have right now is "the way it's implemented".
Shoot me an email! I'm working on patching up some atomics for 7.10; let's try to cook up some semantics!
Edit: some folks who understand this problem better than I do chimed in on the ghc-users thread here: http://www.haskell.org/pipermail/glasgow-haskell-users/2013-December/024473.html .
Assume that I'm wrong both in this comment and in anything I said in the ghc-users thread :)
I'm somewhat baffled by the following behaviour of SBCL garbage collector in REPL. Define two functions:
(defun test-gc ()
  (let ((x (make-array 50000000)))
    (elt x 0)))
(defun add-one (x) (+ 1 x))
Then run
(add-one (test-gc))
I would expect that nothing references the original array anymore. Yet, as (room) reports, the memory is not freed. I would understand if I had run (test-gc) directly - then some reference could have been stuck somewhere in SLIME or in
(list * ** ***)
But what is the case here? Thanks, Andrei.
Update Some time ago I filed a bug. It was recently confirmed. See:
https://bugs.launchpad.net/sbcl/+bug/936304
Just because nothing references the objects anymore doesn't mean that the memory will be reclaimed. The garbage collector will be run some time in the future, and often the only guarantee that you get is that it will be run before you get an out of memory error.
Another thing that may happen here is that you are looking at the Lisp process's memory usage. When memory is GC'ed it is generally not returned to the operating system. Instead, the memory is simply marked as free on the heap and can be used in future memory allocations.
SBCL only...
(gc :full t)
This will forcibly kick off a garbage collection across all generations. I noticed SBCL was holding onto a ton of memory a few days ago and used this to drop the memory down to the "true" usage.
I then wrote an ensure-gc macro to wrap my garbagey computation & experimentation stuff in. I'll paste it in when I get home if I remember... it's nothing fancy.