Order of execution within monads - haskell

I was learning how to use the State monad and I noticed some odd behavior in terms of the order of execution. Removing the distracting bits that involve using the actual state, say I have the following code:
import Control.Monad
import Control.Monad.State
import Debug.Trace

mainAction :: State Int ()
mainAction = do
    traceM "Outside for loop"
    forM_ [0..2] (\i -> do
        traceM $ "i is " ++ show i
        forM_ [0..2] (\j -> do
            traceM $ "j is " ++ show j
            someSubaction i j
            )
        )
Running runState mainAction 1 in ghci produces the following output:
j is 2
j is 1
j is 0
i is 2
j is 2
j is 1
j is 0
i is 1
j is 2
j is 1
j is 0
i is 0
Outside for loop
which seems to be the reverse of the expected order of execution. I thought that maybe this was a quirk of forM and tried it with sequence, which specifically states that it runs its computations sequentially from left to right, like so:
mainAction :: State Int ()
mainAction = do
    traceM "Outside for loop"
    sequence $ map handleI [0..2]
    return ()
  where
    handleI i = do
        traceM $ "i is " ++ show i
        sequence $ map (handleJ i) [0..2]
    handleJ i j = do
        traceM $ "j is " ++ show j
        someSubaction i j
However, the sequence version produces the same output. What is the actual logic behind the order of execution here?

Haskell is lazy, which means things are not executed immediately. Things are executed whenever their result is needed – but no sooner. Sometimes code isn't executed at all if its result isn't needed.
If you stick a bunch of trace calls in a pure function, you will see this laziness happening. The first thing that is needed will be executed first, so that's the trace call you see first.
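A tiny illustration (my own example, not from the question): the trace messages fire in demand order, which here is the order in which the printed string is consumed, not the order they are written in.
import Debug.Trace

pair :: (Int, Int)
pair = (trace "left evaluated" 1, trace "right evaluated" 2)

main :: IO ()
main = print (snd pair, fst pair)
-- Showing the result tuple demands snd pair first, so "right evaluated"
-- is printed before "left evaluated", followed by (2,1).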
When something says "the computation is run from left to right" what it means is that the result will be the same as if the computation was run from left to right. What actually happens under the hood might be very different.
This is in fact why it's a bad idea to do I/O inside pure functions. As you have discovered, you get "weird" results because the execution order can be pretty much anything that produces the correct result.
Why is this a good idea? When the language doesn't enforce a specific execution order (such as the traditional "top to bottom" order seen in imperative languages) the compiler is free to do a tonne of optimisations, such as for example not executing some code at all because its result isn't needed.
I would recommend that you not think too much about execution order in Haskell; there should be no reason to. Leave that up to the compiler. Think instead about which values you want. Does the function give the correct value? Then it works, regardless of which order it executes things in.

I thought that maybe this is a quirk of forM and tried it with sequence which specifically states that it runs its computation sequentially from left to right like so: [...]
You need to learn to make the following, tricky distinction:
The order of evaluation
The order of effects (a.k.a. "actions")
What forM, sequence and similar functions promise is that the effects will be ordered from left to right. So for example, the following is guaranteed to print characters in the same order that they occur in the string:
putStrLn :: String -> IO ()
putStrLn str = forM_ str putChar >> putChar '\n'
But that doesn't mean that expressions are evaluated in this left-to-right order. The program has to evaluate enough of the expressions to figure out what the next action is, but that often does not require evaluating everything in every expression involved in earlier actions.
Your example uses the State monad, which bottoms out to pure code, so that accentuates the order issues. The only thing that traversal functions such as forM promise in this case is that gets inside the actions mapped to the list elements will see the effect of puts for elements to their left in the list.
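A small sketch of what that promise does give you (the example names are mine, not from the question): each action sees the state left behind by the actions for the elements to its left, so the result is the list of running sums in left-to-right order, even though nothing is guaranteed about when the pure parts get evaluated.
import Control.Monad (forM)
import Control.Monad.State

runningSums :: State Int [Int]
runningSums = forM [1, 2, 3] $ \x -> do
    modify (+ x)   -- the "put" for this element
    get            -- sees the puts of every element to its left
-- evalState runningSums 0 == [1,3,6]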

Related

Tail-recursive string splitting in Haskell

I'm considering the problem of splitting a string s at a character c.
This is expressed as
break (c ==) s
where the Haskell library definition of break (c ==) is close enough to
br [] = ([],[])
br s@(h:t) = if (c == h)
               then ([],s)
               else let (h',t') = br t in (h:h',t')
(And let's suppose that I immediately wanted access to the second item of the return value, so that any lazy evaluation has been forced through.) The recursive call to br t appears to store h on the call stack, but a general sense of the algorithm indicates that this shouldn't be necessary. Here is one way of doing it in constant stack space, in a pseudocode language with mutability, where & denotes passage by reference, and lists are implemented as LISPy pairs:
br(c,s) =
    allocate res_head, res_rest
    iter(c,s,&res_head,&res_rest)
    return (res_head,res_rest)

iter(c,s,&res_head,&res_rest) =
    case s of
      []   -> set res_head = res_rest = []             -- and terminate
      c:ss -> set res_head = [], res_rest = s          -- and terminate
      x:ss -> allocate new_pair
              set res_head = new_pair, new_pair.head = x
              iter(c,ss,&new_pair.tail,&res_rest)      -- tail call / jump
Whether or not GHC is smart enough to find this optimization, I'd like to formulate the computation in Haskell in a manner that is patently tail-recursive. How might one do this?
Tail recursive breakAt
The standard accumulator introduction trick would produce something like this:
breakAt :: Char -> String -> (String, String)
breakAt needle = breakAtAcc []
  where breakAtAcc :: String -> String -> (String, String)
        breakAtAcc seen [] = (reverse seen, [])
        breakAtAcc seen cs@(c:cs')
          | c == needle
          = (reverse seen, cs)
          | otherwise
          = breakAtAcc (c : seen) cs'
The recursive part of this is tail recursive, although we process the characters that make up the pre-split part of the return value in the wrong order for building up a list, so they need to be reversed at the end. However even ignoring that (using a version without the reverse), this is probably worse.
In Haskell you're worrying about the wrong thing if you're concerned about the stack overflow errors you would see from deep recursion in many other languages (often prevented by tail call optimisation, hence tail recursion being desirable). Haskell does not have this kind of stack overflow. Haskell does have a stack, which can overflow, but it's not the normal call stack from imperative languages.
For example, if I start GHCi with ghci +RTS -K65k to explicitly set the maximum stack size to 65 KB (about the smallest value I could get it to start up with), then tripping the standard foldr (+) stack overflow doesn't take much:
λ foldr (+) 0 [1..3000]
*** Exception: stack overflow
A mere 3,000 recursive steps kills it. But I can run your br on much larger lists without problem:
λ let (pre, post) = br 'b' (replicate 100000000 'a' ++ "b") in (length pre, length post)
(100000000,1)
it :: (Int, Int)
That's 100 million non-tail recursive steps. If each of those took a stack frame and they were fitting in our 65 KB stack, we'd be getting about 1500 stack frames for every byte. Clearly this kind of recursion does not actually cause the stack consumption problems it does in other languages! That's because it's not the recursion depth itself that's causing the stack overflow in foldr (+) 0 [1..3000]. (See the last section at the end if you want to know what does cause it)
The advantage br has over a tail-recursive version like breakAt is that it's productive. If you're only interested in the first n characters of the prefix, then at most n characters of the input string will be examined (if you're interested in the post-split string, then obviously it will need to examine enough of the string to find the split). You can observe this by running br and breakAt on a long input string and taking a small bit of the prefix, something like this:
λ let (pre, post) = br 'b' (replicate 100000000 'a' ++ "b") in take 5 pre
"aaaaa"
it :: [Char]
If you try the same thing with breakAt (even if you take out the call to reverse), it'll at first only print the opening " and then spend a long time thinking before eventually coming up with the rest of "aaaaa". That's because it has to find the split point before it returns anything except another recursive call; the first character of the prefix is not available until the split point has been reached. And that's the essence of tail recursion; there's no way to fix it.
You can see it even more definitively by using undefined:
λ let (pre, post) = br 'b' ("12345" ++ undefined) in take 5 pre
"12345"
it :: [Char]
λ let (pre, post) = breakAtRev 'b' ("12345" ++ undefined) in take 5 pre
"*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:74:14 in base:GHC.Err
undefined, called at <interactive>:18:46 in interactive:Ghci8
br can return the first 5 characters without examining whether or not there is a 6th. breakAt (with or without reverse) forces more of the input, and so hits the undefined.
This is a common pattern in Haskell. Changing an algorithm to make it tail recursive frequently makes performance worse. You do want tail recursion if the final return value is a small type like an Int, Double, etc that can't be consumed in a "gradual" way; but you need to make sure any accumulator parameter you're using is strictly evaluated in that case! That's why for summing a list foldl' is better than foldr; there's no way to consume the sum "gradually" so we want tail recursion like foldl, but it has to be the strict variant foldl' or we still get stack overflows even though it's tail recursive! But when you're returning something like a list or a tree, it's much better if you can arrange for consuming the result gradually to cause the input to be read gradually. Tail recursion fundamentally does not allow this.
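For the small-result case, a minimal example of that advice (nothing here is specific to the question):
import Data.List (foldl')

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0   -- strict accumulator, constant stack

-- foldr  (+) 0 [1..3000]   -- builds a chain of (+) thunks (the overflow above)
-- foldl  (+) 0 [1..3000]   -- tail recursive, but the accumulator is a thunk chain
-- foldl' (+) 0 [1..3000]   -- tail recursive and strict: fine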
What causes stack consumption in Haskell?
Haskell is lazy. So when you call a recursive function it doesn't necessarily run all the way to the "bottom" of the recursion immediately as it would in a strict language (requiring the stack frames from every level of the recursion to be "live" at once, if they can't be optimised away by something like tail call elimination). It doesn't necessarily run at all of course, only when the result is demanded, but even then "demand" causes the function to run only as far as "weak head normal form". That has a fancy technical definition, but it more-or-less means the function will run until it has produced a data constructor.
So if the function's code itself returns a data constructor, as br does (all of its cases return the pair constructor (,)), then entering the function will be complete at the end of that one single step. The data constructor's fields may contain thunks for further recursive calls (as they do in br), but those recursive calls will only be actually run when something pattern matches on this constructor to extract those fields, and then pattern matches on them. Often that is just about to happen, because the pattern match on the returned constructor is what caused the demand to run this function in the first place, but it is still resolved after this function returns. And so any recursive calls in the constructor's fields don't have to be made while the first call is "still running", and thus we don't have to keep a call stack frame around for it when we enter the recursive calls. (I'm sure the actual GHC implementation does lots of fancy tricks I'm not covering, so this picture probably isn't correct in detail, but it's an accurate enough mental model for how the language "works")
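To make that concrete, here is a small check of the claim, using the question's br with the split character fixed to 'b' (my specialisation): forcing the result to weak head normal form stops at the pair constructor without ever touching the recursive thunks in its fields.
br :: String -> (String, String)
br [] = ([], [])
br s@(h:t)
  | h == 'b'  = ([], s)
  | otherwise = let (h', t') = br t in (h:h', t')

λ br ("xy" ++ undefined) `seq` "stopped at the (,) constructor"
"stopped at the (,) constructor"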
But what if the code for the function doesn't return a data constructor directly? What if instead it returns another function call? Function calls aren't run until their return value is demanded, but the function we're considering was only run because its return value was demanded. That means the function call it returns is also demanded, so we have to enter it.
We can use tail call elimination to avoid needing a call stack frame for this too. But what if the code for this function makes a pattern match (or uses seq, or strictness analysis decided to demand its arguments early, etc.)? If the thing it's matching on is already evaluated to a data constructor then that's fine, it can run the pattern match now. But if the thing it's matching on is itself a thunk, that means we have to enter some random other function and run it far enough to produce its outermost data constructor. Now we need a stack frame to remember where to come back to when that other function is done.
So stack consumption happens in Haskell not directly from call depth, but from "pattern match depth" or "demand depth"; the depth of thunks we have to enter without finding the outermost data constructor.
So br is totally fine for this sort of stack consumption; all of its branches immediately return a pair constructor (,). The recursive case has thunks for another call to br in its fields, but as we've seen that does not cause stack growth.
In the case of breakAt (or rather breakAtAcc), the return value in the recursive case is another function call we have to enter. We only get to a point where we can stop (a data constructor) after running all the way to the split point. So we lose laziness and productivity, but it still won't cause a stack overflow because of tail call elimination.
The problem with foldr (+) 0 [1..3000] is it returns 0 + <thunk>. That's not a data constructor, it's a function call to +, so it has to be entered. But + is strict in both arguments so before it returns it's going to pattern match on the thunk, requiring us to run it (and thus add a stack frame). That thunk will evaluate foldr (+) 1 [2..3000] to 1 + <thunk>, and entering + again will force that thunk to 2 + <thunk>, and so on, eventually exhausting the stack. But the call depth of foldr technically does no harm, rather it's the nested + thunks that foldr generates that consume the stack. If you could write a similar giant chain of additions literally (and GHC evaluated that naively without rewriting anything), the same stack overflow would happen with no call depth at all. And if you use foldr with a different function you can process infinite lists to an unbounded depth with no stack consumption at all.
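For instance (a quick illustration of that last point, not from the original answer): (:) is lazy in its tail, so foldr with it streams even over an infinite list, while (+) forces the whole chain before producing anything.
λ take 5 (foldr (\x rest -> 2 * x : rest) [] [1..])
[2,4,6,8,10]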
TLDR
You can have a tail recursive break, but it's worse than the version in base. Deep recursion is a problem for strict languages that use a call stack, but not for Haskell. Deep pattern matching is the analogous problem, but it takes more than counting recursion depth to spot that. So trying to make all your recursion be tail recursion in Haskell will frequently make your code worse.

How to check if a function was called before?

I have a function that takes some time to process some input.
I want to make it so that if a user calls that function it would start doing what it does, but if the user calls it again while it's still working on the thing, it'll return a message telling the user to sit tight.
runService :: Arg1 -> Arg2 -> IO String
runService arg1 arg2 = do
    isRunning <- {- Check if the function is running -}
    isDone    <- {- Check if the function is done working -}
    if isRunning
      then return "Work is in progress!"
      else if isDone
        then return "Work is done."
        else do startService arg1 arg2
                return "Work has been started."
I believe I'll need to modify startService function as well but I'm not sure how.
This is its current type signature.
startService :: Arg1 -> Arg2 -> IO ()
It would be even more useful if runService could provide a "progress bar" of sorts.
runService :: Arg1 -> Arg2 -> IO String
runService arg1 arg2 = do
    isRunning <- {- Check if the function is running -}
    isDone    <- {- Check if the function is done working -}
    if isRunning
      then {- Return some progress indicator -}
      else if isDone
        then return "Work is done."
        else do startService arg1 arg2
                return "Work has been started."
                {- Return some progress indicator -}
It is fairly simple for startService to print its status using putStrLn but I'm not certain how I'd supply these status strings to runService or how I'd thread this status upwards all the way to main.
For a procedural language, this would call for a global variable. Looking for something similar in Haskell led me to the StateT, ReaderT, and WriterT monad transformers, but I am having a hard time understanding them and their usage, both in general and in this particular context.
The type of design approach you propose looks heavily influenced by an imperative programming background. Be careful with that, because although you can do imperative programming in Haskell, and mimic some common imperative patterns and behavior, going this way would be full of frustration, with no real benefit.
So you want to take a functional look at it:
- You want to observe the state of a computation from another computation; this means you are going to need multiple threads running concurrently.
- You want to pass messages between threads, so you will need some primitives for message passing. Many options exist for that.
Basically, what you can do is share a concurrency primitive (think of a mutable box reference) between your computing thread and your observing thread. This is enough to implement common behavior, like synchronization (blocking the observing thread until the computation finishes and hands over its result) or polling (is the computation done? yes/no).
Observing progress is another matter, involving cooperation from the computing function. If you can divide the work into steps, you should probably use a continuation-passing style so that you can keep your function pure. This function would do one step of the computation, then return a pair (estimated progress ratio, next step).
Another function, this one in IO, would just keep calling the computation function and updating the shared state with the progress ratio until the work is done.
Inside the shared box, use an algebraic data type:
data ComputationState result = Working Float -- progress ratio
                              | Done result  -- done, get the result
Here you go.
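If it helps, here is one minimal way those pieces might fit together; all the helper names below (worker, report, and so on) are illustrative, not part of the question's API, and the ten fixed steps stand in for the real computation.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
import Control.Monad (forM_, void)

data ComputationState result = Working Float -- progress ratio
                              | Done result  -- done, get the result

-- The computing thread: do the work in steps, updating the shared box as it goes.
worker :: MVar (ComputationState Int) -> IO ()
worker box = do
    forM_ [1 .. 10] $ \i -> do
        threadDelay 200000   -- stand-in for a slice of real work
        modifyMVar_ box (\_ -> return (Working (fromIntegral i / 10)))
    modifyMVar_ box (\_ -> return (Done 42))

-- The observing side: report progress or the finished result.
report :: Show r => MVar (ComputationState r) -> IO String
report box = do
    st <- readMVar box
    return $ case st of
        Working ratio -> "Work is in progress: " ++ show ratio
        Done r        -> "Work is done: " ++ show r

main :: IO ()
main = do
    box <- newMVar (Working 0)
    void (forkIO (worker box))
    threadDelay 500000
    report box >>= putStrLn   -- e.g. "Work is in progress: 0.2"
    threadDelay 2500000
    report box >>= putStrLn   -- "Work is done: 42"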

Two function calls but only one trace displayed

With GHC version 8.0.2 the following program:
import Debug.Trace
f=trace("f was called")$(+1)
main = do
print $ f 1
print $ f 2
outputs:
f was called
2
3
Is it the expected behaviour? If yes, why? I expected the string f was called to be printed twice, one before 2 and one before 3.
Same result on TIO: Try it online!
EDIT
But this program:
import Debug.Trace
f n=trace("f was called:"++show n)$n+1
main = do
print $ f 1
print $ f 2
outputs:
f was called:1
2
f was called:2
3
Try it online!
I suspect those behaviours have something to do with laziness, but my questions remain: is this the expected behaviour and, if yes, why?
Hackage asserts this:
The trace function outputs the trace message given as its first
argument, before returning the second argument as its result.
I don't see it in the first example.
EDIT 2: Third example, based on @amalloy's comments:
import Debug.Trace
f n=trace "f was called"$n+1
main = do
print $ f 1
print $ f 2
outputs:
f was called
2
f was called
3
Your trace prints when f itself is evaluated (which happens once), not each time it is called. If you want the trace to happen as part of the call, you should make sure it is not evaluated until a parameter is received:
f x = trace "f was called" $ x + 1
Also, when I run your TIO I don't see the trace appearing at all. trace is not really a reliable way to print things, because it cheats the IO model that the language is built on. The most subtle changes in evaluation order can disturb it. Of course for debugging you can use it, but as even this simple example demonstrates it is not guaranteed to help much.
In your edit, you quote the documentation of trace:
The trace function outputs the trace message given as its first
argument, before returning the second argument as its result.
And indeed this is exactly what happens in your program! When f is first needed,
trace "f was called" $ (+ 1)
needs to be evaluated. First, "f was called" is printed. Then, trace evaluates to, and returns, (+ 1). This is the final value of the trace expression, and therefore (+ 1) is what f is defined as. The trace has vanished, see?
It is indeed a result of laziness.
Laziness means that merely defining a value doesn't mean it will be evaluated; that will only happen if it's needed for something. If it's not needed, the code that would actually produce it doesn't "do anything". If a particular value is needed the code is run, but only the first time it would be needed; if there are other references to the same value and it is used again, those uses will just directly use the value that was produced the first time.
You have to remember that functions are values in every sense of the term; everything that applies to ordinary values also applies to functions. So your definition of f is simply writing an expression for a value, the expression's evaluation will be deferred until the value of f is actually needed, and as it's needed twice the value (function) the expression computes will be saved and reused the second time.
Let's look at it in more detail:
f=trace("f was called")$(+1)
You're defining a value f with a simple equation (not using any syntactic sugar for writing arguments on the left hand side of the equation, or providing cases via multiple equations). So we can simply take the right hand side as a single expression that defines the value f. Just defining it does nothing, it sits there until you call:
print $ f 1
Now print needs its argument evaluated, so this is forcing the expression f 1. But we can't apply f to 1 without first forcing f. So we need to figure out what function the expression trace "f was called" $ (+1) evaluates to. So trace is actually called, does its unsafe IO printing and f was called appears at the terminal, and then trace returns its second argument: (+1).
So now we know what function f is: (+1). f will now be a direct reference to that function, with no need to evaluate the original code trace("f was called")$(+1) if f is called again. Which is why the second print does nothing.
This case is quite different, even though it might look similar:
f n=trace("f was called:"++show n)$n+1
Here we are using the syntactic sugar for defining functions by writing arguments on the left hand side. Let's desugar that to lambda notation to see more clearly what the actual value being bound to f is:
f = \n -> trace ("f was called:" ++ show n) $ n + 1
Here we've written a function value directly, rather than an expression that can be evaluated to result in a function. So when f needs to be evaluated before it can be called on 1, the value of f is that whole function; the trace call is inside the function instead of being the thing that is called to result in a function. So trace isn't called as part of evaluating f, it's called as part of evaluating the application f 1. If you saved the result of that (say by doing let x = f 1) and then printed it multiple times, you'd only see the one trace. But when we come to evaluate f 2, the trace call is still there inside the function that is the value of f, so when f is called again so is trace.
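To see that sharing in a compiled program (this is my own minimal variation on the question's code, not part of the original answer):
import Debug.Trace

f :: Int -> Int
f n = trace ("f was called: " ++ show n) (n + 1)

main :: IO ()
main = do
    let x = f 1
    print x      -- forces x: the trace fires once, then 2 is printed
    print x      -- x is already evaluated: only 2 is printed
    print (f 2)  -- a fresh application: the trace fires again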

How to make a Haskell program display a preliminary result in response to user input?

I am writing a program in Haskell which repeatedly takes its most recent result and uses this to compute the next result. I want to be able to see the newest result in response to user input, so I tried something like this:
main = mainhelper 0

mainhelper count = do
    count <- return (count + 1)
    line <- getLine
    if null line
      then do mainhelper count
      else do
        putStrLn $ show count
        return ()
I was hoping that getLine would return an empty line if the user hasn't entered anything, but this doesn't happen, instead the program does nothing until it receives user input. Is there a way around this?
One simple solution is to fork a thread for the complicated computation and communicate with the main UI thread via MVar. For example:
import Control.Exception
import Control.Monad
import Control.Concurrent

thinkReallyHard x = do
    threadDelay 1000000 -- as a proxy for something that's actually difficult
    evaluate (x+1)

main = do
    v <- newMVar 0
    forkIO (forever (modifyMVar_ v thinkReallyHard))
    forever (getLine >> readMVar v >>= print)
You may wonder about the role of evaluate in thinkReallyHard. The subtlety there is that MVars are lazy -- they can contain thunks just as easily as computed values. In particular, this means it's easy to accidentally push all the pure computation from the forked thread into the thread that's reading and using the contents of the MVar. The call to evaluate simply forces the forked thread to finish the pure computation before writing to the MVar.
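To make that pitfall concrete, compare these two updates (a small sketch of mine, not part of the program above):
import Control.Concurrent.MVar
import Control.Exception (evaluate)

bumpLazy :: MVar Integer -> IO ()
bumpLazy v = modifyMVar_ v (\x -> return (x + 1))
-- stores an unevaluated (x + 1) thunk; repeated calls build up a chain
-- that whoever eventually reads the MVar has to pay for

bumpStrict :: MVar Integer -> IO ()
bumpStrict v = modifyMVar_ v (\x -> evaluate (x + 1))
-- forces the addition in this thread before the new value is stored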
It does return an empty line if you hit enter without entering text -- you just immediately prompt for more input, so it might look like nothing is happening. But if you run the program, hit enter three times, then enter something non-empty, you'll see that the final count reflects the multiple entries.
Here's a modified version of your code that does the same thing, but is slightly more canonical:
main = mainhelper 0

mainhelper count = do
    let count' = count + 1
    line <- getLine
    if null line
      then mainhelper count'
      else print count'
Rather than count <- return (count + 1), you can write let count' = count + 1 -- this is a pure binding, not something that needs to invoke the IO monad (as you're doing with <- and return). But I used count' instead of count because otherwise that will create a recursive binding. The '-suffixing is a standard idiom for a "modified version" of an identifier.
Next I switched putStrLn . show to print, which is part of the Prelude and does exactly that.
I got rid of the return () because print (and putStrLn) already have the type IO (). This allows you to elide the do statements, as there's now a single IO expression in each branch of the if.
It's not really clear what you're trying to do that's different from what your code already does -- in imperative terms, it increments a counter every time the user presses enter, and displays the state of the counter every time the user enters some non-empty text.
Here's another version that prints the counter every time, but only increments it when prompted, which may or may not be helpful to you:
main = mainhelper 0

mainhelper count = do
    print count
    line <- getLine
    mainhelper (if null line then count else succ count)
I'm using succ, the successor function, instead of the explicit + 1, which is just a style preference.

Using Netwire to spawn lists from a value

I think I'm fundamentally misunderstanding how to attack this type of problem with Netwire:
I have the following test-case:
I'd like to take a string, split it into lines, print each line, then exit.
The pieces I'm missing are:
- How to inhibit after a value early in a pipeline, but then, if that value is split and the results produced later, not inhibit until all of those results have been consumed
- What the main loop function should look like
- Whether I should use once
Here is the code I have so far:
import Control.Wire

main :: IO ()
main = recur mainWire

recur :: Wire () IO () () -> IO ()
recur a = do
    (e, w) <- stepWire a 0 ()
    case e of
      Left ()  -> return ()
      Right () -> recur w

mainWire :: Wire () IO () ()
mainWire = pure "asdf\nqwer\nzxcv"
       >>> once
       >>> arr lines
       >>> fifo
       >>> arr print
       >>> perform
This outputs the following:
"asdf"
and then quits. If I remove the once, then the program performs as expected, repeatedly outputting the full list of lines forever.
I'd like the following output:
"asdf"
"qwer"
"zxcv"
I'm sure that I'm just missing some intuition here about the correct way to approach this class of problem with Netwire.
Note: This is for an older version of netwire (before events worked like they do now), so some translating of the code would be required to make this work properly with the current version.
If I understood you right, you want a wire that produces the lines of a string, and then inhibits when it's done with that? It's a bit hard to tell.
once, as the name implies, produces exactly once and then inhibits forever. Again, it's a bit unclear what your wires are doing (because you didn't tell us), but it's not something you normally put into your "main" wire (so far I've only ever used once with andThen).
If that is correct, I'd probably do it something along the lines of:
produceLines s = produceLines' $ lines s
  where
    produceLines' []     = inhibit mempty
    produceLines' (l:ls) = pure l . once --> produceLines' ls
(You could write that as a fold or something, I just thought this was a bit clearer).
--> is a prettier way of writing andThen, in case you didn't know. Basically this splits the passed string into lines and turns them into a wire that produces the first line once, and then behaves like a similar wire except with the first element removed. It inhibits indefinitely once all values have been produced.
Is that what you wanted?
Update
I see what you were trying to do now.
The wire you were trying to write could be done as
perform . arr print . fifo . ((arr lines . pure "asdf\nqwer\nzxcv" . once) --> pure [])
The part in the parentheses produces ["asdf","qwer","zxcv"] for one instant, and then produces [] forever. fifo takes values from the previous wire, adding the result from the previous wire in every instant (because of that we have to keep producing []). The rest is as you know it (I'm using the function-like notation rather than the arrow notation because I prefer it, but that shouldn't be a problem for you).
