Browsing the source of readChan one finds the following implementation and comment, starting with version 4.6 of base:
-- |Read the next value from the 'Chan'.
readChan :: Chan a -> IO a
readChan (Chan readVar _) = do
  modifyMVarMasked readVar $ \read_end -> do  -- Note [modifyMVarMasked]
    (ChItem val new_read_end) <- readMVar read_end
        -- Use readMVar here, not takeMVar,
        -- else dupChan doesn't work
    return (new_read_end, val)
-- Note [modifyMVarMasked]
-- This prevents a theoretical deadlock if an asynchronous exception
-- happens during the readMVar while the MVar is empty. In that case
-- the read_end MVar will be left empty, and subsequent readers will
-- deadlock. Using modifyMVarMasked prevents this. The deadlock can
-- be reproduced, but only by expanding readMVar and inserting an
-- artificial yield between its takeMVar and putMVar operations.
Prior to base version 4.6, modifyMVar was used rather than modifyMVarMasked.
I don't understand what theoretical problem is being solved here. The last sentence states there is a problem if the thread yields between the takeMVar and putMVar that make up readMVar. But since readMVar executes under mask_, how can an async exception prevent the put after a successful take?
Any help understanding the issue here is appreciated.
Let's compare the source of modifyMVar and modifyMVarMasked, since the code changed from using one to using the other:
modifyMVar m io =
  mask $ \restore -> do
    a      <- takeMVar m
    (a',b) <- restore (io a) `onException` putMVar m a
    putMVar m a'
    return b

modifyMVarMasked m io =
  mask_ $ do
    a      <- takeMVar m
    (a',b) <- io a `onException` putMVar m a
    putMVar m a'
    return b
The key here is that modifyMVar calls restore before executing its second argument, whereas modifyMVarMasked does not. So readMVar was not called under mask_ in the old version of the code as you claim in your question! It was called under restore, instead, and therefore asynchronous exceptions could be enabled after all.
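For reference, here is a sketch (not the actual pre-4.6 library source, just the shape implied by the question) of readChan built on modifyMVar; note that the body, and hence the readMVar, runs under restore, i.e. with asynchronous exceptions re-enabled. It would only compile inside Control.Concurrent.Chan itself, since the Chan and ChItem constructors are not exported:
-- sketch of the pre-4.6 readChan, using modifyMVar instead of modifyMVarMasked
readChan :: Chan a -> IO a
readChan (Chan readVar _) =
  modifyMVar readVar $ \read_end -> do
    -- this body runs under 'restore', so async exceptions are unmasked here,
    -- although readMVar itself still does mask_ internally
    (ChItem val new_read_end) <- readMVar read_end
    return (new_read_end, val)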
Here's me working through it.
So in readMVar...
readMVar :: MVar a -> IO a
readMVar m =
  mask_ $ do
    a <- takeMVar m
    putMVar m a
    return a
...despite the mask_, the runtime may still raise an asynchronous exception in a blocked takeMVar, since blocking operations are interruptible. Note that in that function there's no need to actually handle that case: either the readMVar worked, in which case we're safe from async exceptions, or the takeMVar never succeeds; either way we never break the MVar by leaving it empty. (Is this correct? This is what I took away from the answer to my own related question.)
modifyMVar and modifyMVarMasked are:
modifyMVar :: MVar a -> (a -> IO (a,b)) -> IO b
modifyMVar m io =
  mask $ \restore -> do
    a      <- takeMVar m
    (a',b) <- restore (io a) `onException` putMVar m a
    putMVar m a'
    return b

modifyMVarMasked :: MVar a -> (a -> IO (a,b)) -> IO b
modifyMVarMasked m io =
  mask_ $ do
    a      <- takeMVar m
    (a',b) <- io a `onException` putMVar m a
    putMVar m a'
    return b
...where the difference is that in modifyMVar the masking state is restored (i.e. async exceptions are probably unmasked again) while running io a, which in our case is more or less readMVar.
EDIT: Although readMVar is mask_-ed as well, so now I can't see why either choice of modifyMVarMasked or modifyMVar would make a difference...
The comment seems to imply that yield (inserted into readMVar) is interruptible (I can't find this documented anywhere), so an async exception might be raised there. In that case readVar would be restored (in both the current and pre-4.6 versions), but its read end would be left empty, so readers of a non-empty queue would see an empty one and block.
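To make the note's experiment concrete, here is a sketch (the name is invented for illustration) of readMVar expanded by hand with the artificial yield the comment mentions; if yield really is interruptible, this is the window in which the read end could be left empty:
-- readMVar expanded by hand, with an artificial yield between takeMVar and
-- putMVar as the Note describes (illustrative only, not library code)
readMVarWithYield :: MVar a -> IO a
readMVarWithYield m =
  mask_ $ do
    a <- takeMVar m   -- m is now empty
    yield             -- if yield is interruptible, a pending async exception
                      -- could be delivered here, leaving m empty for good
    putMVar m a
    return a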
You may be interested in reading the GHC Trac ticket for this commit, which has a sample program that consistently reproduces this bug when both Control.Concurrent.Chan and the test program are compiled with -O0:
https://ghc.haskell.org/trac/ghc/ticket/6153
In a similar vein:
https://ghc.haskell.org/trac/ghc/ticket/5870
Related
Given:
λ: >let x = Control.Concurrent.MVar.newMVar ""
λ: >:t x
x :: IO (MVar [Char])
I tried to call putMVar:
λ: >:t putMVar
putMVar :: MVar a -> a -> IO ()
λ: >:t x
x :: IO (MVar [Char])
yet it failed
λ: >x >>= \y -> putMVar y "foo"
*** Exception: thread blocked indefinitely in an MVar operation
Why did it fail and how can I update x with "foo" rather than ""?
x is not an MVar. It is an action that creates an MVar, i.e. it's another name for newMVar "".
x >>= \y -> putMVar y "foo" is an action that creates an MVar and names it y. It then tries to put "foo" into the MVar. However, y already contains "", so putMVar blocks. It doesn't simply block forever, because y is a local variable in this action: no one else has access to it, so no reader will ever empty it. The runtime detects this situation (the thread is deadlocked) and throws an exception to it instead.
What you should do instead is start with:
x <- newMVar ""
This makes x an MVar.
Then you can take the old value ("") out:
takeMVar x
And put a new value in
putMVar x "foo"
(MVar doesn't support replacing the existing value in one step; you first have to take it out, then put a new value in.)
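As an aside (not part of the original answer): if what you want is to replace the current value, Control.Concurrent.MVar also offers swapMVar, which packages the take-then-put into a single call (it is only atomic if there are no other producers for the MVar). Assuming the module is imported into GHCi:
λ: >x <- newMVar ""
λ: >swapMVar x "foo"
""
λ: >readMVar x
"foo"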
Let's look up the documentation:
data MVar a
An MVar (pronounced "em-var") is a synchronising variable, used for
communication between concurrent threads. It can be thought of as a
box, which may be empty or full.
and
newMVar :: a -> IO (MVar a)
Create an MVar which contains the supplied value.
and
putMVar :: MVar a -> a -> IO ()
Put a value into an MVar. If the MVar is currently full, putMVar will
wait until it becomes empty.
There are two further important properties of putMVar:
- putMVar is single-wakeup. That is, if there are multiple threads blocked in putMVar, and the MVar becomes empty, only one thread will be woken up. The runtime guarantees that the woken thread completes its putMVar operation.
- When multiple threads are blocked on an MVar, they are woken up in FIFO order. This is useful for providing fairness properties of abstractions built using MVars.
melpomene's answer contains the correct explanation. I let my answer remain here for the cited documentation.
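A small self-contained sketch of the documented blocking behaviour (my own illustration, not from the answers above; the delays are only there to make the interleaving likely):
import Control.Concurrent
import Control.Concurrent.MVar

main :: IO ()
main = do
  m <- newMVar ""                -- m starts out full, as in the question
  _ <- forkIO (putMVar m "foo")  -- blocks as long as m is still full
  threadDelay 100000
  old <- takeMVar m              -- empties m, waking the blocked putMVar
  print old                      -- ""
  threadDelay 100000
  readMVar m >>= print           -- "foo"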
I have this code
mvarToList :: MVar (Maybe a) -> IO [a]
mvarToList mv = do
  mby <- takeMVar mv
  case mby of
    Nothing -> return []
    Just x  -> do
      xs <- unsafeInterleaveIO (mvarToList mv)
      return (x : xs)
{-# NOINLINE mvarToList #-}

streamQuery_ :: FromRow a => Connection -> Query -> IO [a]
streamQuery_ conn q = do
  mv <- newEmptyMVar
  void $ forkIO $ do
    fold_ conn q () (\_ x -> putMVar mv (Just x))
    putMVar mv Nothing
  mvarToList mv
It uses the postgresql-simple library, which provides a fold_ function: you supply a function that is run on every row of the query's result. The internal implementation is such that if the provided function blocks, the next result won't be fetched (modulo batching).
What I've done here is basically attach the evaluation of each element of the list to a takeMVar, which gives me a lazy list that streams results from the DB.
The problem here is leakage. The only way I can hope to avoid leaks is if, SOMEHOW, garbage-collecting the list leads the RTS to determine that the fetching thread is blocked forever (since nothing will be taken from the MVar anymore) and to throw an exception to it. Maybe I could even catch that exception and handle cleaning up?
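For what it's worth, a minimal sketch of the idea in that last paragraph, independent of postgresql-simple (producer and cleanup are made-up names): the producer thread catches BlockedIndefinitelyOnMVar, which the RTS throws to it once nothing can ever take from the MVar again, and uses that as its cleanup signal. Whether the RTS actually delivers the exception depends on nothing else keeping the MVar reachable:
import Control.Concurrent.MVar
import Control.Exception

-- Hypothetical producer: pushes values into the MVar until the RTS notices
-- that no consumer can ever take from it again, then runs the cleanup action.
producer :: MVar (Maybe Int) -> IO () -> IO ()
producer mv cleanup =
  handle (\BlockedIndefinitelyOnMVar -> cleanup) $ do
    mapM_ (putMVar mv . Just) [1 .. 1000]
    putMVar mv Nothing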
This ensures mutual exclusion of actions:
do
  lock <- newMVar ()
  let atomicPrint = withMVar lock . const . print
  mapM_ (forkIO . atomicPrint) [['1'..'8'],['a'..'h']]
This doesn't:
do
  lock <- newMVar ()
  let atomicPrint x = bracket_ (takeMVar lock) (print x) (putMVar lock ())
  mapM_ (forkIO . atomicPrint) [['1'..'8'],['a'..'h']]
Can you explain why? By the definitions of withMVar and bracket_ on Hackage and equational reasoning I came to the conclusion that the two pieces of code should do the same. But running in GHCi proves me wrong.
You have the arguments in the wrong order: you are acquiring the lock and immediately releasing it, whereas the signature says:
bracket_
  :: IO a  -- computation to run first ("acquire resource")
  -> IO b  -- computation to run last ("release resource")
  -> IO c  -- computation to run in-between
  -> IO c  -- returns the value from the in-between computation
so it should be:
bracket_ (takeMVar lock) (putMVar lock ()) $ print x
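Putting that back into the original example (the same code as in the question, only with the bracket_ arguments reordered):
do
  lock <- newMVar ()
  let atomicPrint x = bracket_ (takeMVar lock) (putMVar lock ()) (print x)
  mapM_ (forkIO . atomicPrint) [['1'..'8'],['a'..'h']]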
I am trying to implement a stack for use in a concurrent application. I would like the following semantics: push should never block, and pop should block the calling thread on an empty stack, but still permit pushes. I implemented it as follows (irrelevant bits at the bottom):
data Stream a = Stream a (MVar (Stream a))

data Stack a = Stack (MVar (Int, MVar (Stream a)))

popStack :: Stack a -> IO a
popStack (Stack stack) = do
  (sz, mvar) <- takeMVar stack
  mbStream <- tryTakeMVar mvar
  case mbStream of
    Nothing            -> putMVar stack (sz, mvar) >> popStack (Stack stack)
    Just (Stream x xs) -> putMVar stack (sz-1, xs) >> return x
If the stream MVar is empty I have to release the lock on the stack and try again. However, this seems like a kludge: if a thread calls pop on an empty stack, it could loop several times before being suspended, even though the MVar will not become full while that thread is being executed. Is there a better way utilizing MVars to write pop with the desired semantics?
import Control.Concurrent.MVar
import Control.Monad
import Control.Concurrent
import Text.Printf

newStack :: IO (Stack a)
newStack = do
  stream <- newEmptyMVar
  Stack <$> newMVar (0, stream)

pushStack :: Stack a -> a -> IO ()
pushStack (Stack stack) val = do
  (sz, stream) <- takeMVar stack
  stream' <- newMVar (Stream val stream)
  putMVar stack (sz+1, stream')

test = do
  s <- newStack
  _ <- forkIO $ mapM_ (\a -> printf "pushing %c... " a >> pushStack s a >> threadDelay 100000) ['a' .. 'z']
  _ <- forkIO $ do
    replicateM 13 (popStack s) >>= printf "\npopped 13 elems: %s\n"
    replicateM 13 (popStack s) >>= printf "\npopped 13 elems: %s\n"
  threadDelay (5*10^6)
  putStrLn "Done"
It's not very exciting, but the simplest solution would be to use STM (if you're using cabal you'll need the stm package in your dependencies list).
import Control.Concurrent.STM

newtype Stack a = Stack (TVar [a])

new :: STM (Stack a)
new = fmap Stack $ newTVar []

put :: a -> Stack a -> STM ()
put a (Stack v) = modifyTVar' v (a:)

get :: Stack a -> STM a
get (Stack v) = do
  stack <- readTVar v
  case stack of
    []     -> retry
    (a:as) -> do writeTVar v as
                 return a
You get the blocking behavior you want with retry, which is implemented in such a way that threads won't be awoken until the TVar changes to something other than []. This is also nice because you can now use your stack in larger composed atomic transactions, and you don't have to worry about making sure exceptions don't break your structure.
If you're trying to do high-performance concurrency with lots of threads contending for reads and/or writes, you might find that this isn't clever enough. In that case you might have fun designing a structure based around the fetch-and-add-based counter from atomic-primops, or seeing what else is available on hackage.
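A possible usage sketch for the STM stack above (main and the numeric values are just for illustration; it assumes new, put and get from the block above are in scope):
main :: IO ()
main = do
  s <- atomically new
  atomically $ put (1 :: Int) s
  atomically $ put 2 s
  x <- atomically (get s)   -- pops 2; on an empty stack, retry would block here
  print x                   -- prints 2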
A quick critique:
"push should never block" is not something you are actually going to achieve, though you may have a personal definition of "block" that differs from the GHC meaning. For instance, your pushStack does block.
popStack on an empty stack goes into a very busy loop, repeatedly taking and putting the Stack MVar. You do not want to do this; you even say "pop should block".
You use takeMVar and putMVar instead of withMVar or modifyMVar. This means you are not thinking about exceptions, and the Stack will not be good in a general library.
So you have learned about MVars, and you are using them to learn more.
Here StackData is either a stack with data (Full) or without data (Empty). When Empty, there is an initially empty MVar for hungry poppers to wait on.
import Control.Concurrent.MVar
import Control.Exception (evaluate)

type Lock = MVar ()
type Some a = (a, [a])  -- non-empty version of a list

data StackData a = Full !(Some a)
                 | Empty !Lock

data Stack a = Stack { stack :: MVar (StackData a) }

pop s = do
  x <- modifyMVar (stack s) $ \ sd ->
    case sd of
      Empty lock -> do
        return (Empty lock, Left lock)
      Full (a, []) -> do
        lock <- newEmptyMVar
        return (Empty lock, Right a)
      Full (a, (b:bs)) -> return (Full (b, bs), Right a)
  case x of
    Left lock -> do
      withMVar lock return  -- wait on next pusher
      pop s
    Right a -> return a

push s a = modifyMVar_ (stack s) $ \ sd ->
  case sd of
    Empty lock -> do
      _ <- tryPutMVar lock ()  -- should succeed, releases waiting poppers
      evaluate (Full (a, []))  -- do not accumulate lazy thunks
    Full (b, bs) -> do
      xs <- evaluate (b:bs)    -- do not accumulate lazy thunks
      evaluate (Full (a, xs))  -- do not accumulate lazy thunks
Note: I have not tried to compile this.
EDIT: A safer version of push puts () into the lock only once it has succeeded in changing the stack from Empty to Full. That certainty can be achieved with the 'mask' operation. The 'restore' is used inside 'modifyMVar' but is not required:
push s a = mask $ \restore -> do
  mLock <- modifyMVar (stack s) $ \ sd -> restore $
    case sd of
      Empty lock -> do
        n <- evaluate (Full (a, []))  -- do not accumulate lazy thunks
        return (n, Just lock)
      Full (b, bs) -> do
        xs <- evaluate (b:bs)         -- do not accumulate lazy thunks
        n <- evaluate (Full (a, xs))
        return (n, Nothing)
  forM_ mLock $ \ lock -> tryPutMVar lock ()  -- forM_ (Control.Monad) here in place of the original whenJust, which is not in base
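A hedged usage sketch of the above: newStack is my own addition (the answer does not define a constructor), added only so the fragment is runnable; it should work with either version of push:
import Control.Concurrent (forkIO, threadDelay)

-- Hypothetical constructor, not part of the answer above:
newStack :: IO (Stack a)
newStack = do
  lock <- newEmptyMVar
  Stack <$> newMVar (Empty lock)

demo :: IO ()
demo = do
  s <- newStack
  _ <- forkIO (pop s >>= print)  -- blocks until something is pushed
  push s (42 :: Int)
  threadDelay 100000             -- give the popper time to print 42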
I'm trying to understand the MVar example in the latest GHC documentation:
data SkipChan a = SkipChan (MVar (a, [MVar ()])) (MVar ())

newSkipChan :: IO (SkipChan a)
newSkipChan = do
  sem <- newEmptyMVar
  main <- newMVar (undefined, [sem])
  return (SkipChan main sem)

putSkipChan :: SkipChan a -> a -> IO ()
putSkipChan (SkipChan main _) v = do
  (_, sems) <- takeMVar main
  putMVar main (v, [])
  mapM_ (\sem -> putMVar sem ()) sems

getSkipChan :: SkipChan a -> IO a
getSkipChan (SkipChan main sem) = do
  takeMVar sem
  (v, sems) <- takeMVar main
  putMVar main (v, sem:sems)
  return v

dupSkipChan :: SkipChan a -> IO (SkipChan a)
dupSkipChan (SkipChan main _) = do
  sem <- newEmptyMVar
  (v, sems) <- takeMVar main
  putMVar main (v, sem:sems)
  return (SkipChan main sem)
I understand most of the program, but I have two questions:
Are operations like putSkipChan atomic? It seems to avoid blocking on putMVar by first doing a takeMVar. But wouldn't that fail if something else calls putMVar after the takeMVar but before the putMVar? In such cases, it seems the program would block forever.
Why does dupSkipChan append sem to the list of semaphores in the SkipChan? Isn't that done by getSkipChan? It seems to me that calling dupSkipChan followed by getSkipChan (which seems to be what you have to do to get multiple readers) would cause a block when putSkipChan tries to wake up the same semaphore twice.
You are correct: another thread could call putMVar main and mess up putSkipChan. But the module defining the above code would not export the SkipChan constructor, so such a rogue operation is impossible from outside.
dupSkipChan makes a new empty MVar called sem and adds that to the list in main. It does not add the pre-existing one that was created in newSkipChan, so there is no block.
To explain more to other readers of this question and comment: The idea is that there may be multiple reader threads. Initially SkipChan main sem1 is the only such reader. dupSkipChan makes a SkipChan main sem2. If there are thousands of readers then you would not want to notify all of them of a new value in putSkipChan, thus the design is that getSkipChan puts its sem into the list in main. Initializing SkipChan as done in newSkipChan and dupSkipChan also includes putting the new empty sem into the list in main.
The above initialization and design mean that the first getSkipChan obtains the most recent value written so far (or blocks for the first value to arrive). Future getSkipChan calls on that SkipChan will always get a newer value than any gotten before, and these will not block if that value is already available.
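A small usage sketch of the above (assuming the SkipChan code is in scope; the values are just for illustration):
main :: IO ()
main = do
  sc  <- newSkipChan
  putSkipChan sc (1 :: Int)
  putSkipChan sc 2            -- overwrites 1 before anyone read it
  getSkipChan sc >>= print    -- 2: the most recent value; 1 was skipped
  sc2 <- dupSkipChan sc
  putSkipChan sc 3
  getSkipChan sc2 >>= print   -- 3
  getSkipChan sc  >>= print   -- 3 (a further get here would block until a new put)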