I've read the "Tackling the Awkward Squad" paper by SPJ, and most of it was quite easy to follow; however, I didn't completely understand what exactly the two side conditions above the separation line (in the rule for delivering an asynchronous exception with throwTo) mean:
The paper states that they are there to ensure that the second context (E2) is maximal, i.e. that it includes all the active catches. However, I do not completely understand what that means. Does it mean that an exception won't be thrown if there is a catch inside the second thread? And why does bind appear there as well?
Intuitively, the side condition is used to "insert" the exception ioError e in the right place, in a deterministic way.
Consider M = catch (threadDelay 1000000) someHandler. We have both:
M = Ea[M]
where Ea[x] = x
M = Eb[M']
where Eb[x] = catch x someHandler
M' = threadDelay 1000000
Without the side condition, we would have two distinct operational steps, making the semantics non-deterministic:
{throwTo t e}s | {M}t ==> {return ()}s | {Ea[ioError e]}t
= {return ()}s | {ioError e}t
{throwTo t e}s | {M}t ==> {return ()}s | {Eb[ioError e]}t
= {return ()}s | {catch (ioError e) someHandler}t
In the former case, the error is not caught, in the latter it is. The side condition ensures only the latter is a valid step.
Bind is there as well to avoid replacing everything in:
M = catch (threadDelay 1000000) someHandler >>= something
Here, if you only require "M not a catch", you could again choose M = Ea[M] and replace all the code. The side condition instead forces you to choose
Ec[x] = catch x someHandler >>= something
and insert the ioError e in the correct place inside the catch.
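As a concrete (if informal) illustration of the behaviour the rule pins down, here is a small runnable sketch; the handler, the MVar plumbing and the ErrorCall payload are my own choices, not from the paper:

import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
    done <- newEmptyMVar
    t <- forkIO $
           (threadDelay 1000000 >> putMVar done "delay finished")
             `catch` \e -> putMVar done ("caught: " ++ show (e :: SomeException))
    threadDelay 10000            -- let the other thread block inside the catch
    throwTo t (ErrorCall "e")    -- plays the role of {throwTo t e}s
    takeMVar done >>= putStrLn   -- prints "caught: e": the enclosing catch, i.e.
                                 -- the Eb decomposition, is the one that applies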
I'm currently working my way through Learn You a Haskell for Great Good, and I'm trying to modify one of the code snippets in chapter nine, "Input and Output" to handle errors correctly:
main = do
    (command:args) <- getArgs
    let result = lookup command dispatch
    if result == Nothing
        then
            errorExit
        else
            let (Just action) = result
            action args
where
dispatch :: [(String, [String] -> IO ())]
is an association list
and
errorExit :: IO ()
is some function that prints an error message.
Compiling this with GHC gives the error message
todo.hs:20:13: parse error in let binding: missing required 'in'
which (to my understanding), seems to be saying that the "let" here doesn't realise it's in a "do" block.
Adding "do" on lines five and seven (after "then" and "else" respectively), changes the error message to
todo.hs:20:13:
The last statement in a 'do' block must be an expression
let (Just action) = result
todo.hs:21:5: Not in scope: `action'.
and now, whilst I agree with the first error message, the second tells me that one of my variables has jumped out of scope? I've double-checked my alignment, and nothing seems to be out of place.
What is the appropriate way to assign a variable within an if clause that is within a do block?
My suggestion is to not use if in the first place, use case. By using case you get to test the value and bind the result to a variable all in one go. Like this:
main = do
    (command:args) <- getArgs
    case lookup command dispatch of
        Nothing -> errorExit
        Just action -> action args
For a more in-depth discussion on why we should prefer case over if see boolean blindness.
#svenningsson suggested the right fix. The reason your original fails is that let clauses can only appear at the top level of a do block; they're simple syntactic sugar that doesn't look into inner expressions:
do let x = 1
   y
desugars to the let expression
let x = 1 in y
Alas, in a do block, an expression clause like if ... then ... else ... has no way to declare variables in the rest of the do block at all.
There are at least two possible ways to get around this.
Absorb the remainder of the do block into the expression:
main = do
    (command:args) <- getArgs
    let result = lookup command dispatch
    if result == Nothing
        then
            errorExit
        else do
            let (Just action) = result
            action args
(This is essentially the method #svenningsson uses in his better case version too.)
This can however get a bit awkward if the remainder of the do expression needs to be duplicated into more than one branch.
("Secret" trick: GHC (unlike standard Haskell) doesn't actually require a final, inner do block to be indented more than the outer one, which can help if the amount of indentation starts getting annoying.)
Pull the variable declaration outside the expression:
main = do
    (command:args) <- getArgs
    let result = lookup command dispatch
    action <- if result == Nothing
        then
            errorExit
        else do
            let (Just action') = result
            return action'
    action args
Here that requires making up a new variable name, since the pattern in the let clause isn't just a simple variable.
Finally, action was always out of scope in the last line of your code, but GHC works in several stages, and if it aborts in the parsing stage, it won't check for scope errors. (For some reason it does the "The last statement in a 'do' block must be an expression" check at a later stage than parsing.)
Addendum: Now that I understand what #Sibi meant, I see that result == Nothing isn't going to work, so you cannot use if ... then ... else ... with it even with the above workarounds.
You are getting an error because you are trying to compare values of function type. When you perform the check if result == Nothing, it tries to check the equality of Nothing with the value of result, which has type Maybe ([String] -> IO ()).
So, if you want it to typecheck, you would have to define an Eq instance for ->, and that wouldn't make any sense, as it amounts to comparing two functions for equality.
You can also use fmap to write your code:
main = do
    (command:args) <- getArgs
    let result = lookup command dispatch
    print $ fmap (const args) result
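For completeness, here is a sketch of an alternative that never compares result to Nothing at all, so no Eq instance is needed; it assumes the question's dispatch and errorExit, and the imports of the original program:

main = do
    (command:args) <- getArgs
    maybe errorExit ($ args) (lookup command dispatch)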
Suppose I have a function that should calculate some value in one case and throw an exception otherwise. I would like to use QuickCheck to ensure my function behaves correctly; however, it is not obvious how to perform this sort of check. Is it possible, and if so, how do I check that an exception of a certain type is thrown and that it contains the correct information about its cause?
Indeed ioProperty is the key to this sort of test. You will need to use it in combination with catch or try. Here I show the latter:
prop_exceptional :: Int -> Property
prop_exceptional n = ioProperty $ do
    result <- try . evaluate $ myDangerousFunction n
    return $ r === result
  where
    r | n == 0    = Left MyException
      | otherwise = Right 42
Quite obviously, myDangerousFunction should throw MyException whenever it gets 0 and return 42 otherwise. Note the useful function evaluate, which you need in order to evaluate a pure value in an IO context so that exceptions produced there can be caught.
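To make the property runnable, here is a sketch of the surrounding definitions it assumes; MyException and myDangerousFunction are hypothetical stand-ins with the behaviour described above:

import Control.Exception (Exception, evaluate, throw, try)
import Test.QuickCheck (Property, ioProperty, quickCheck, (===))

-- Hypothetical exception type and function matching the property above.
data MyException = MyException
    deriving (Show, Eq)

instance Exception MyException

myDangerousFunction :: Int -> Int
myDangerousFunction 0 = throw MyException
myDangerousFunction _ = 42

main :: IO ()
main = quickCheck prop_exceptional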
I was learning how to use the State monad and I noticed some odd behavior in terms of the order of execution. Removing the distracting bits that involve using the actual state, say I have the following code:
import Control.Monad
import Control.Monad.State
import Debug.Trace
mainAction :: State Int ()
mainAction = do
    traceM "Starting the main action"
    forM [0..2] (\i -> do
        traceM $ "i is " ++ show i
        forM [0..2] (\j -> do
            traceM $ "j is " ++ show j
            someSubaction i j
            )
        )
Running runState mainAction 1 in ghci produces the following output:
j is 2
j is 1
j is 0
i is 2
j is 2
j is 1
j is 0
i is 1
j is 2
j is 1
j is 0
i is 0
Outside for loop
which seems like the reverse of the order of execution one might expect. I thought that maybe this was a quirk of forM, so I tried it with sequence, which specifically states that it runs its computations sequentially from left to right, like so:
mainAction :: State Int ()
mainAction = do
    traceM "Outside for loop"
    sequence $ map handleI [0..2]
    return ()
  where
    handleI i = do
        traceM $ "i is " ++ show i
        sequence $ map (handleJ i) [0..2]
    handleJ i j = do
        traceM $ "j is " ++ show j
        someSubaction i j
However, the sequence version produces the same output. What is the actual logic in terms of the order of execution that is happening here?
Haskell is lazy, which means things are not executed immediately. Things are executed whenever their result is needed – but no sooner. Sometimes code isn't executed at all if its result isn't needed.
If you stick a bunch of trace calls in a pure function, you will see this laziness happening. The first thing that is needed will be executed first, so that's the trace call you see first.
When something says "the computation is run from left to right" what it means is that the result will be the same as if the computation was run from left to right. What actually happens under the hood might be very different.
This is in fact why it's a bad idea to do I/O inside pure functions. As you have discovered, you get "weird" results because the execution order can be pretty much anything that produces the correct result.
Why is this a good idea? When the language doesn't enforce a specific execution order (such as the traditional "top to bottom" order seen in imperative languages) the compiler is free to do a tonne of optimisations, such as for example not executing some code at all because its result isn't needed.
I would recommend you to not think too much about execution order in Haskell. There should be no reason to. Leave that up to the compiler. Think instead about which values you want. Does the function give the correct value? Then it works, regardless of which order it executes things in.
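As a tiny, self-contained illustration of this demand-driven behaviour (my own example, not from the question): trace fires when its value is demanded, not when it appears in the source.

import Debug.Trace (trace)

pair :: (Int, Int)
pair = (trace "left evaluated" 1, trace "right evaluated" 2)

main :: IO ()
main = print (snd pair)   -- prints "right evaluated" and then 2; the left
                          -- component is never demanded, so its trace never fires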
I thought that maybe this is a quirk of forM and tried it with sequence which specifically states that it runs its computation sequentially from left to right like so: [...]
You need to learn to make the following, tricky distinction:
The order of evaluation
The order of effects (a.k.a. "actions")
What forM, sequence and similar functions promise is that the effects will be ordered from left to right. So for example, the following is guaranteed to print characters in the same order that they occur in the string:
putStrLn :: String -> IO ()
putStrLn str = forM_ str putChar >> putChar '\n'
But that doesn't mean that expressions are evaluated in this left-to-right order. The program has to evaluate enough of the expressions to figure out what the next action is, but that often does not require evaluating everything in every expression involved in earlier actions.
Your example uses the State monad, which bottoms out to pure code, so that accentuates the order issues. The only thing that a traversal function such as forM promises in this case is that gets inside the actions mapped to the list elements will see the effect of puts for elements to their left in the list.
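Here is a small sketch of what that guarantee buys you (the names are mine, not from the question): each action's get sees the puts of the actions to its left, which is exactly what mapM/forM promise for State.

import Control.Monad.State

-- Number each character; the counter threads left to right through the effects.
labelAll :: String -> State Int [(Int, Char)]
labelAll = mapM $ \c -> do
    n <- get
    put (n + 1)
    return (n, c)

main :: IO ()
main = print (evalState (labelAll "abc") 0)   -- [(0,'a'),(1,'b'),(2,'c')]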
I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable, whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from the tutorial:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
                   <1> -> 1;
                   <2> -> 2) Pack{1,0} ;
main = f 3

== transformed into ==>

t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
             <1> -> 1;
             <2> -> 2 ;
main = f 3
I implemented this solution and it works like charm, that is, the output is Pack{2,2} 2 Pack{1,0}.
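(For concreteness, here is a rough sketch of what such a lifting pass might look like. The types are my own minimal stand-ins, not the tutorial's definitions, and it assumes, as in the example above, that the alternatives have no free variables, so only the scrutinee needs to be passed to the new supercombinator.)

import Control.Monad.State

type Name = String

data Expr
    = EVar Name
    | ENum Int
    | EConstr Int Int
    | EAp Expr Expr
    | ECase Expr [(Int, [Name], Expr)]
    deriving Show

type ScDefn = (Name, [Name], Expr)

-- Replace every case met in a non-strict (C-compiled) position by a call to a
-- fresh $CaseN supercombinator whose body is the original case over its argument.
liftCases :: Expr -> State (Int, [ScDefn]) Expr
liftCases (EAp f a) = EAp <$> liftCases f <*> liftCases a
liftCases (ECase scrut alts) = do
    scrut' <- liftCases scrut
    (n, scs) <- get
    let name = "$Case" ++ show n
        sc   = (name, ["x"], ECase (EVar "x") alts)
    put (n + 1, sc : scs)
    return (EAp (EVar name) scrut')
liftCases e = return e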
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was to just implement compilation of ECase expressions in the C scheme. And I did so by mimicking the rule for compilation in the E scheme (page 134 of the tutorial, but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF) and C scheme doesn't guarantee that, as opposed to E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same happens when I use the same rule as for the E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong, and I can see that from the outputs. But I'm having trouble stating a formal argument as to why that approach was bound to fail.
Can someone provide me with such an argument/proof, or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but just to delay everything until an EVAL happens (which it may or may not). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)
Background
In response to a question, I built and uploaded a bounded-tchan (wouldn't have been right for me to upload jnb's version). If the name isn't enough, a bounded-tchan (BTChan) is an STM channel that has a maximum capacity (writes block if the channel is at capacity).
Recently, I've received a request to add a dup feature like the one on the regular TChan. And thus begins the problem.
How the BTChan looks
A simplified (and actually non-functional) view of BTChan is below.
data BTChan a = BTChan
    { max     :: Int
    , count   :: TVar Int
    , channel :: TVar [(Int, a)]
    , nrDups  :: TVar Int
    }
Every time you write to the channel you include the number of dups (nrDups) in the tuple - this is an 'individual element counter' which indicates how many readers have gotten this element.
Every reader will decrement the counter for the element it reads, then move its read-pointer to the next element in the list. If a reader decrements the counter to zero, then the value of count is decremented to properly reflect the available capacity of the channel.
To be clear on the desired semantics: A channel capacity indicates the maximum number of elements queued in the channel. Any given element is queued until a reader of each dup has received the element. No elements should remain queued for a GCed dup (this is the main problem).
For example, let there be three dups of a channel (c1, c2, c3) with a capacity of 2, where 2 items were written into the channel and then all items were read out of c1 and c2. The channel is still full (0 remaining capacity) because c3 hasn't consumed its copies. At any point in time, if all references to c3 are dropped (so c3 is GCed), then the capacity should be freed (restored to 2 in this case).
Here's the issue: let's say I have the following code
c <- newBTChan 1
_ <- dupBTChan c -- This represents what would probably be a pathological bug or terminated reader
writeBTChan c "hello"
_ <- readBTChan c
Causing the BTChan to look like:
BTChan 1 (TVar 0) (TVar []) (TVar 1) --> -- newBTChan
BTChan 1 (TVar 0) (TVar []) (TVar 2) --> -- dupBTChan
BTChan 1 (TVar 1) (TVar [(2, "hello")]) (TVar 2) --> -- readBTChan c
BTChan 1 (TVar 1) (TVar [(1, "hello")]) (TVar 2) -- OH NO!
Notice at the end the read count for "hello" is still 1? That means the message is not considered gone (even though it will get GCed in the real implementation) and our count will never decrement. Because the channel is at capacity (1 element maximum) the writers will always block.
I want a finalizer created each time dupBTChan is called. When a dupped (or original) channel is collected all elements remaining to be read on that channel will get the per-element count decremented, also the nrDups variable will be decremented. As a result, future writes will have the correct count (a count that doesn't reserve space for variables not-read by GCed channels).
Solution 1 - Manual Resource Management (what I want to avoid)
JNB's bounded-tchan actually has manual resource management for this reason. See the cancelBTChan. I'm going for something harder for the user to get wrong (not that manual management isn't the right way to go in many cases).
Solution 2 - Use exceptions by blocking on TVars (GHC can't do this the way I want)
EDIT: this solution, and solution 3 which is just a spin-off, does not work! Due to bug 5055 (WONTFIX), GHC sends exceptions to both blocked threads, even though one is sufficient (which is theoretically determinable, but not practical with the GHC GC).
If all the ways to get a BTChan are IO, we can forkIO a thread that reads/retries on an extra (dummy) TVar field unique to the given BTChan. The new thread will catch an exception when all other references to the TVar are dropped, so it will know when to decrement the nrDups and individual element counters. This should work but forces all my users to use IO to get their BTChans:
data BTChan a = BTChan { ... as before ..., dummyTV :: TVar () }

dupBTChan :: BTChan a -> IO (BTChan a)
dupBTChan c = do
    ... as before ...
    d <- newTVarIO ()
    let chan = BTChan ... d
    forkIO $ watchBTChan chan
    return chan

watchBTChan :: BTChan a -> IO ()
watchBTChan b = do
    catch (atomically (readTVar (dummyTV b) >> retry)) $ \e -> do
        case fromException (e :: SomeException) of
            Just BlockedIndefinitelyOnSTM -> atomically $ do
                -- the BTChan must have gotten collected
                ls <- readTVar (channel b)
                writeTVar (channel b) (map (\(n, x) -> (n - 1, x)) ls)
                readTVar (nrDups b) >>= writeTVar (nrDups b) . subtract 1
            _ -> watchBTChan b
EDIT: Yes, this is a poor man's finalizer, and I don't have any particular reason to avoid using addFinalizer. That would be the same solution, still forcing the use of IO, AFAICT.
Solution 3: A cleaner API than solution 2, but GHC still doesn't support it
Users start a manager thread by calling initBTChanCollector, which will monitor a set of these dummy TVars (from solution 2) and do the needed clean-up. Basically, it shoves the IO into another thread that knows what to do via a global (unsafePerformIOed) TVar. Things work basically like solution 2, but the creation of BTChans can still be STM. Failure to run initBTChanCollector would result in an ever-growing list (space leak) of tasks as the process runs.
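(The global-registry part is the usual unsafePerformIO/NOINLINE idiom; here is a minimal sketch with my own names, not the package's:)

import Control.Concurrent.STM
import System.IO.Unsafe (unsafePerformIO)

-- Hypothetical global list of dummy TVars for the collector thread to watch.
{-# NOINLINE btChanRegistry #-}
btChanRegistry :: TVar [TVar ()]
btChanRegistry = unsafePerformIO (newTVarIO [])

-- Would be called from newBTChan/dupBTChan inside the same transaction.
registerDummy :: TVar () -> STM ()
registerDummy d = readTVar btChanRegistry >>= writeTVar btChanRegistry . (d :)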
Solution 4: Never allow discarding BTChans
This is akin to ignoring the problem. If the user never drops a dupped BTChan then the issue disappears.
Solution 5
I see ezyang's answer (totally valid and appreciated), but I really would like to keep the current API, just with a dup function.
Solution 6
Please tell me there's a better option.
EDIT:
I implemented solution 3 (totally untested alpha release) and handled the potential space leak by making the global itself a BTChan - that chan should probably have a capacity of 1 so forgetting to run init shows up really quickly, but that's a minor change. This works in GHCi (7.0.3), but that seems to be incidental. GHC throws exceptions to both blocked threads (the valid one reading the BTChan and the watching thread), so if you are blocked reading a BTChan when another thread discards its reference, then you die.
Here is another solution: require all accesses to the bounded channel duplicate to be bracketed by a function that releases its resources on exit (whether by an exception or normally). You can use a monad with a rank-2 runner to prevent duplicated channels from leaking out (a sketch follows below). It's still manual, but the type system makes it a lot harder to do naughty things.
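For concreteness, here is a minimal sketch of that rank-2 runner; the BTChan operations are stand-ins with hypothetical signatures (cancelBTChan, as in jnb's package, is assumed to release the duplicate's resources), not the real API:

{-# LANGUAGE RankNTypes #-}
import Control.Exception (bracket)

-- Stand-ins for the real bounded-tchan operations (hypothetical signatures).
data BTChan a = BTChan
dupBTChan :: BTChan a -> IO (BTChan a)
dupBTChan = return
cancelBTChan :: BTChan a -> IO ()
cancelBTChan _ = return ()

-- The phantom 's' plays the same role as in ST: the caller must work for every
-- 's', and the result type cannot mention it, so the duplicate cannot escape
-- the bracket that cancels it, whether we exit normally or by an exception.
newtype Dup s a = Dup (BTChan a)

withDup :: BTChan a -> (forall s. Dup s a -> IO r) -> IO r
withDup chan body = bracket (dupBTChan chan) cancelBTChan (\d -> body (Dup d))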
You really don't want to rely on true IO finalizers, because GHC gives no guarantees about when a finalizer may be run: for all you know it may wait until the end of the program before running the finalizer, which means you're deadlocked until then.