Repeating an action without control structures - haskell

I'm dipping my toe into the Haskell pool, and I'm starting to get the hang of it. For the most part, the lack of traditional control structures doesn't bother me too much. (I'm coming from a C/C++ background.) But I'm a little confused about how you'd repeat an action. For example, if you have a turn-based game, in an imperative language, you might do something like this:
while (not somePlayerWon())
{
    getNextMove();
    updateGameState();
}
It's not clear to me how you'd do this in Haskell. You could do something recursive like:
playARound gameState = do
    nextMove <- getNextMove gameState
    newGameState <- updateGameState gameState nextMove
    if not (somePlayerWon newGameState)
        then playARound newGameState
        else gameOver -- I realize this probably has to return something
But if you do that, don't you run the risk of a stack overflow? Or will the compiler take the tail-recursive definition and convert it into the equivalent of a for loop? If so, is this the accepted way of doing this sort of thing?

In Haskell we try not to use explicit recursion. Recursion is a really big hammer, and for most problems, higher-order functions provide a slightly more controlled solution. Your code is perfectly fine, it's tail recursive, but it's often easier to read the combinator-based approach.
For loops in monads, the monad-loops package is nice. Your example would be written as
whileM_ (not . somePlayerWon <$> getState) $ do
    state <- getState
    move  <- getNextMove
    putState $ getNewState state move
Where getState and putState behave like get and put from the State monad.
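For a self-contained picture, here is a minimal runnable sketch of that approach over StateT. The toy types are invented for this example (Game, somePlayerWon, getNextMove), with getState and putState played by get and put:

import Control.Monad.IO.Class (liftIO)
import Control.Monad.Loops (whileM_)
import Control.Monad.State (StateT, evalStateT, get, put)

type Game = Int  -- toy state: a player "wins" when the counter reaches 0

somePlayerWon :: Game -> Bool
somePlayerWon = (<= 0)

getNextMove :: StateT Game IO Int
getNextMove = return 1  -- a real game would read player input here

main :: IO ()
main = flip evalStateT 3 $
    whileM_ (not . somePlayerWon <$> get) $ do
        state <- get
        move  <- getNextMove
        liftIO $ putStrLn ("state: " ++ show state)
        put (state - move)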
Or if you're avoiding a monad and just passing state manually
until somePlayerWon
      (\gameState -> nextGameState gameState (getNextMove gameState))
      gameState
or
flip (until somePlayerWon) gameState $ \gameState ->
    nextGameState gameState $ getNextMove gameState
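As a tiny self-contained illustration of the pure version (the numbers are invented for this sketch, with "winning" standing in for reaching 10 points):

-- until :: (a -> Bool) -> (a -> a) -> a -> a, from the Prelude
main :: IO ()
main = print (until (>= 10) (+ 3) 0)  -- keeps "moving" until won; prints 12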
See Avoid Explicit Recursion for more on why explicit recursion should be treated with caution.

You are correct: if the function is tail-recursive, it will be transformed into a loop by the compiler. And this way of writing the main loop is indeed how people usually do it.
As further reading, you may find interesting a couple of short posts on games in functional languages by James Hague (he uses Erlang for illustrations, but the ideas are general), and the description of the Component-Entity-State approach to game programming by Chris Granger (illustrated in Clojure).

Related

What (if any) is the generally accepted way of minimising the impact of the IO Monad on my code

I'm struggling a bit with the IO Monad. (still very much a 101 learner)
I believe I understand the excellent reasons for segregating "IO" from purely functional code, but this appears to be making my code much more complex when using clock and environment attributes. Here's an example (related to clocks):
timeZoneSeconds = liftA (60*) $ liftA timeZoneMinutes getCurrentTimeZone
Now, I have lots of other stuff to do with timeZoneSeconds -- adding, subtracting, comparing -- elsewhere in the program, and as timeZoneSeconds interacts with other bits, practically everything I'm dealing with turns into an IO value, and thus fills my code with liftAs.
So basically I'm seeing all my pure code turning into IO-dirty code.
In all the didactic material I've seen, most of the explanations around the IO monad are of the general sort "read stuff then write stuff", without much "calculate stuff".
Is there a recommended way to minimise the impact of this?
Should I redefine all the operators I need to use liftA "under the covers"?
Or should I just get on with it?
Think of it as dependency injection. You inject the results of the impure calls into your pure code, then use the results of the pure code to do more impure IO such as printing the result:
main = do
    env <- lookupEnv "ENV"
    tz <- getCurrentTimeZone
    let result = pureCode env tz
    putStr result
Your pureCode function doesn't have any IO attached.
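For concreteness, a hypothetical pureCode might look like this; the point is that the time-zone arithmetic itself needs no IO or liftA once the values have been injected:

import Data.Time (TimeZone, timeZoneMinutes)

pureCode :: Maybe String -> TimeZone -> String
pureCode env tz =
    let timeZoneSeconds = 60 * timeZoneMinutes tz  -- plain Int arithmetic
    in maybe "no ENV" id env ++ ": UTC offset " ++ show timeZoneSeconds ++ "s"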

Is the monadic IO construct in Haskell just a convention?

Regarding Haskell's monadic IO construct:
Is it just a convention, or is there an implementation reason for it?
Could you not just FFI into libc.so instead to do your I/O, and skip the whole IO-monad thing?
Would it work anyway, or is the outcome nondeterministic because of:
(a) Haskell's lazy evaluation?
(b) another reason, like that the GHC is pattern-matching for the IO monad and then handling it in a special way (or something else)?
What is the real reason - in the end you end up using a side effect, so why not do it the simple way?
Yes, monadic I/O is a consequence of Haskell being lazy. Specifically, though, monadic I/O is a consequence of Haskell being pure, which is effectively necessary for a lazy language to be predictable.†
This is easy to illustrate with an example. Imagine for a moment that Haskell were not pure, but it was still lazy. Instead of putStrLn having the type String -> IO (), it would simply have the type String -> (), and it would print a string to stdout as a side-effect. The trouble with this is that this would only happen when putStrLn is actually called, and in a lazy language, functions are only called when their results are needed.
Here’s the trouble: putStrLn produces (). Looking at a value of type () is useless, because () means “boring”. That means that this program would do what you expect:
main :: ()
main =
  case putStr "Hello, " of
    () -> putStrLn " world!"
-- prints “Hello, world!\n”
But I think you can agree that this programming style is pretty odd. The case ... of is necessary, however, because it forces the evaluation of the call to putStr by matching against (). If you tweak the program slightly:
main :: ()
main =
  case putStr "Hello, " of
    _ -> putStrLn " world!"
…now it only prints world!\n, and the first call isn’t evaluated at all.
This actually gets even worse, though, because it becomes even harder to predict as soon as you start trying to do any actual programming. Consider this program:
printAndAdd :: String -> Integer -> Integer -> Integer
printAndAdd msg x y = putStrLn msg `seq` (x + y)

main :: ()
main =
  let x = printAndAdd "first" 1 2
      y = printAndAdd "second" 3 4
  in (y + x) `seq` ()
Does this program print out first\nsecond\n or second\nfirst\n? Without knowing the order in which (+) evaluates its arguments, we don’t know. And in Haskell, evaluation order isn’t even always well-defined, so it’s entirely possible that the order in which the two effects are executed is actually completely impossible to determine!
This problem doesn’t arise in strict languages with a well-defined evaluation order, but in a lazy language like Haskell, we need some additional structure to ensure side-effects are (a) actually evaluated and (b) executed in the correct order. Monads happen to be an interface that elegantly provides the necessary structure to enforce that order.
Why is that? And how is that even possible? Well, the monadic interface provides a notion of data dependency in the signature for >>=, which enforces a well-defined evaluation order. Haskell’s implementation of IO is “magic”, in the sense that it’s implemented in the runtime, but the choice of the monadic interface is far from arbitrary. It seems to be a fairly good way to encode the notion of sequential actions in a pure language, and it makes it possible for Haskell to be lazy and referentially transparent without sacrificing predictable sequencing of effects.
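To see that data dependency concretely: each action's continuation is built from the previous action's result, so the runtime has no freedom to reorder them.

main :: IO ()
main =
    getLine >>= \name ->                 -- the next action is built from name…
    putStrLn ("Hello, " ++ name ++ "!")  -- …so it cannot be executed first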
It’s worth noting that monads are not the only way to encode side-effects in a pure way—in fact, historically, they’re not even the only way Haskell handled side-effects. Don’t be misled into thinking that monads are only for I/O (they’re not), only useful in a lazy language (they’re plenty useful to maintain purity even in a strict language), only useful in a pure language (many things are useful monads that aren’t just for enforcing purity), or that you need monads to do I/O (you don’t). They do seem to have worked out pretty well in Haskell for those purposes, though.
† Regarding this, Simon Peyton Jones once noted that “Laziness keeps you honest” with respect to purity.
Could you just FFI into libc.so instead to do IO and skip the IO Monad thing?
Taking from https://en.wikibooks.org/wiki/Haskell/FFI#Impure_C_Functions, if you declare an FFI function as pure (so, with no reference to IO), then
GHC sees no point in calculating twice the result of a pure function
which means the result of the function call is effectively cached. For example, a program where a foreign impure pseudo-random number generator is declared to return a CUInt
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign
import Foreign.C.Types

foreign import ccall unsafe "stdlib.h rand"
  c_rand :: CUInt

main = putStrLn (show c_rand) >> putStrLn (show c_rand)
returns the same thing every call, at least on my compiler/system:
16807
16807
If we change the declaration to return an IO CUInt
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign
import Foreign.C.Types

foreign import ccall unsafe "stdlib.h rand"
  c_rand :: IO CUInt

main = c_rand >>= putStrLn . show >> c_rand >>= putStrLn . show
then this results in (probably) a different number returned each call, since the compiler knows it's impure:
16807
282475249
So you're back to having to use IO for the calls to the standard libraries anyway.
Let's say using FFI we defined a function
c_write :: String -> ()
which lies about its purity, in that whenever its result is forced it prints the string. So that we don't run into the caching problems in Michal's answer, we can define these functions to take an extra () argument.
c_write :: String -> () -> ()
c_rand :: () -> CUInt
On an implementation level this will work as long as CSE is not too aggressive (which it is not in GHC because that can lead to unexpected memory leaks, it turns out). Now that we have things defined this way, there are many awkward usage questions that Alexis points out—but we can solve them using a monad:
newtype IO a = IO { runIO :: () -> a }

instance Monad IO where
    return = IO . const
    m >>= f = IO $ \() -> let x = runIO m () in x `seq` runIO (f x) ()

rand :: IO CUInt
rand = IO c_rand
Basically, we just stuff all of Alexis's awkward usage questions into a monad, and as long as we use the monadic interface, everything stays predictable. In this sense IO is just a convention—because we can implement it in Haskell there is nothing fundamental about it.
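For example, using the hand-rolled IO above (a sketch; in modern GHC the Monad instance would also need the Functor and Applicative boilerplate):

-- chaining with the home-made (>>=) forces each result in turn,
-- so the fake-pure effects fire in a predictable order
twoRands :: IO (CUInt, CUInt)
twoRands =
    rand >>= \a ->
    rand >>= \b ->
    return (a, b)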
That's from the operational vantage point.
On the other hand, Haskell's semantics in the report are specified using denotational semantics alone. And, in my opinion, the fact that Haskell has a precise denotational semantics is one of the most beautiful and useful qualities of the language, allowing me a precise framework to think about abstractions and thus manage complexity with precision. And while the usual abstract IO monad has no accepted denotational semantics (to the lament of some of us), it is at least conceivable that we could create a denotational model for it, thus preserving some of the benefits of Haskell's denotational model. However, the form of I/O we have just given is completely incompatible with Haskell's denotational semantics.
Simply put, there are only supposed to be two distinguishable values (modulo fatal error messages) of type (): () and ⊥. If we treat FFI as the fundamentals of I/O and use the IO monad only "as a convention", then we effectively add a jillion values to every type—to continue having a denotational semantics, every value must be adjoined with the possibility of performing I/O prior to its evaluation, and with the extra complexity this introduces, we essentially lose all our ability to consider any two distinct programs equivalent except in the most trivial cases—that is, we lose our ability to refactor.
Of course, because of unsafePerformIO this is already technically the case, and advanced Haskell programmers do need to think about the operational semantics as well. But most of the time, including when working with I/O, we can forget about all that and refactor with confidence, precisely because we have learned that when we use unsafePerformIO, we must be very careful to ensure it plays nicely, that it still affords us as much denotational reasoning as possible. If a function has unsafePerformIO, I automatically give it 5 or 10 times more attention than regular functions, because I need to understand the valid patterns of use (usually the type signature tells me everything I need to know), I need to think about caching and race conditions, I need to think about how deep I need to force its results, etc. It's awful[1]. The same care would be necessary of FFI I/O.
In conclusion: yes it's a convention, but if you don't follow it then we can't have nice things.
[1] Well actually I think it's pretty fun, but it's surely not practical to think about all those complexities all the time.
That depends on what the meaning of "is" is—or at least what the meaning of "convention" is.
If a "convention" means "the way things are usually done" or "an agreement among parties covering a particular matter" then it is easy to give a boring answer: yes, the IO monad is a convention. It is the way the designers of the language agreed to handle IO operations and the way that users of the language usually perform IO operations.
If we are allowed to choose a more interesting definition of "convention" then we can get a more interesting answer. If a "convention" is a discipline imposed on a language by its users in order to achieve a particular goal without assistance from the language itself, then the answer is no: the IO monad is the opposite of a convention. It is a discipline enforced by the language that assists its users in constructing and reasoning about programs.
The purpose of the IO type is to create a clear distinction between the types of "pure" values and the types of values which require execution by the runtime system to generate a meaningful result. The Haskell type system enforces this strict separation, preventing a user from (say) creating a value of type Int which launches the proverbial missiles. This is not a convention in the second sense: its entire goal is to move the discipline required to perform side effects in a safe and consistent way from the user and onto the language and its compiler.
Could you just FFI into libc.so instead to do IO and skip the IO Monad thing?
It is, of course, possible to do IO without an IO monad: see almost every other extant programming language.
Would it work anyway, or is the outcome nondeterministic because of Haskell's lazy evaluation, or for another reason, like GHC pattern-matching for the IO monad and then handling it in a special way?
There is no such thing as a free lunch. If Haskell allowed any value to require execution involving IO then it would have to lose other things that we value. The most important of these is probably referential transparency: if myInt could sometimes be 1 and sometimes be 5 depending on external factors then we would lose most of our ability to reason about our programs in a rigorous way (known as equational reasoning).
Laziness was mentioned in other answers, but the issue with laziness would specifically be that sharing would no longer be safe. If x in let x = someExpensiveComputationOf y in x * x was not referentially transparent, GHC would not be able to share the work and would have to compute it twice.
What is the real reason?
Without the strict separation of effectful values from non-effectful values provided by IO and enforced by the compiler, Haskell would effectively cease to be Haskell. There are plenty of languages that don't enforce this discipline. It would be nice to have at least one around that does.
In the end you end up using a side effect, so why not do it the simple way?
Yes, in the end your program is represented by a value called main with an IO type. But the question isn't where you end up, it's where you start: If you start by being able to differentiate between effectful and non-effectful values in a rigorous way then you gain a lot of advantages when constructing that program.
What is the real reason - in the end you end up using a side effect, so why not do it the simple way?
...you mean like Standard ML? Well, there's a price to pay - instead of being able to write:
any :: (a -> Bool) -> [a] -> Bool
any p = or . map p
you would have to type out this:
any :: (a -> Bool) -> [a] -> Bool
any p [] = False
any p (y:ys) = p y || any p ys
Could you not just FFI into libc.so instead to do your I/O, and skip the whole IO-monad thing?
Let's rephrase the question:
Could you not just do I/O like Standard ML, and skip the whole IO-monad thing?
...because that's effectively what you would be trying to do. Why "trying"?
SML is strict, and relies on syntactic ordering to specify the order of evaluation everywhere;
Haskell is non-strict, and relies on data dependencies to specify the order of evaluation for certain expressions e.g. I/O actions.
So:
Would it work anyway, or is the outcome nondeterministic because of:
(a) Haskell's lazy evaluation?
(a) - the combination of non-strict semantics and visible effects is generally useless. For an amusing exhibition of just how useless this combination can be, watch this presentation by Erik Meijer (the slides can be found here).

Monads in Haskell and Purity

My question is whether monads in Haskell actually maintain Haskell's purity, and if so how. Frequently I have read about how side effects are impure but that side effects are needed for useful programs (e.g. I/O). In the next sentence it is stated that Haskell's solution to this is monads. Then monads are explained to some degree or another, but not really how they solve the side-effect problem.
I have seen this and this, and my interpretation of the answers is actually one that came to me in my own readings -- the "actions" of the IO monad are not the I/O themselves but objects that, when executed, perform I/O. But it occurs to me that one could make the same argument for any code or perhaps any compiled executable. Couldn't you say that a C++ program only produces side effects when the compiled code is executed? That all of C++ is inside the IO monad and so C++ is pure? I doubt this is true, but I honestly don't know in what way it is not. In fact, didn't Moggi (sp?) initially use monads to model the denotational semantics of imperative programs?
Some background: I am a fan of Haskell and functional programming and I hope to learn more about both as my studies continue. I understand the benefits of referential transparency, for example. The motivation for this question is that I am a grad student and I will be giving 2 1-hour presentations to a programming languages class, one covering Haskell in particular and the other covering functional programming in general. I suspect that the majority of the class is not familiar with functional programming, maybe having seen a bit of scheme. I hope to be able to (reasonably) clearly explain how monads solve the purity problem without going into category theory and the theoretical underpinnings of monads, which I wouldn't have time to cover and anyway I don't fully understand myself -- certainly not well enough to present.
I wonder if "purity" in this context is not really well-defined?
It's hard to argue conclusively in either direction because "pure" is not particularly well-defined. Certainly, something makes Haskell fundamentally different from other languages, and it's deeply related to managing side-effects and the IO type¹, but it's not clear exactly what that something is. Given a concrete definition to refer to we could just check if it applies, but this isn't easy: such definitions will tend to either not match everyone's expectations or be too broad to be useful.
So what makes Haskell special, then? In my view, it's the separation between evaluation and execution.
The base language—closely related to the λ-calculus—is all about the former. You work with expressions that evaluate to other expressions, 1 + 1 to 2. No side-effects here, not because they were suppressed or removed but simply because they don't make sense in the first place. They're not part of the model² any more than, say, backtracking search is part of the model of Java (as opposed to Prolog).
If we just stuck to this base language with no added facilities for IO, I think it would be fairly uncontroversial to call it "pure". It would still be useful as, perhaps, a replacement for Mathematica. You would write your program as an expression and then get the result of evaluating the expression at the REPL. Nothing more than a fancy calculator, and nobody accuses the expression language you use in a calculator of being impure³!
But, of course, this is too limiting. We want to use our language to read files and serve web pages and draw pictures and control robots and interact with the user. So the question, then, is how to preserve everything we like about evaluating expressions while extending our language to do everything we want.
The answer we've come up with? IO. A special type of expression that our calculator-like language can evaluate which corresponds to doing some effectful actions. Crucially, evaluation still works just as before, even for things in IO. The effects get executed in the order specified by the resulting IO value, not based on how it was evaluated. IO is what we use to introduce and manage effects into our otherwise-pure expression language.
I think that's enough to make describing Haskell as "pure" meaningful.
footnotes
¹ Note how I said IO and not monads in general: the concept of a monad is immensely useful for dozens of things unrelated to input and output, and the IO type has to be more than just a monad to be useful. I feel the two are linked too closely in common discourse.
² This is why unsafePerformIO is so, well, unsafe: it breaks the core abstraction of the language. This is the same as, say, putzing with specific registers in C: it can both cause weird behavior and stop your code from being portable because it goes below C's level of abstraction.
³ Well, mostly, as long as we ignore things like generating random numbers.
A function with type, for example, a -> IO b always returns an identical IO action when given the same input; it is pure in that it cannot possibly inspect the environment, and obeys all the usual rules for pure functions. This means that, among other things, the compiler can apply all of its usual optimization rules to functions with an IO in their type, because it knows they are still pure functions.
Now, the IO action returned may, when run, look at the environment, read files, modify global state, whatever, all bets are off once you run an action. But you don't necessarily have to run an action; you can put five of them into a list and then run them in reverse of the order in which you created them, or never run some of them at all, if you want; you couldn't do this if IO actions implicitly ran themselves when you created them.
Consider this silly program:
main :: IO ()
main = do
  inputs <- take 5 . lines <$> getContents
  let [line1, line2, line3, line4, line5] = map print inputs
  line3
  line1
  line2
  line5
If you run this and then enter 5 lines, you will see them printed back to you, but in a different order and with one omitted, even though our Haskell program runs map print over them in the order they were received. You couldn't do this with C's printf, because it immediately performs its IO when called; Haskell's version just returns an IO action, which you can still manipulate as a first-class value and do whatever you want with.
I see two main differences here:
1) In Haskell, you can do things that are not in the IO monad. Why is this good? Because if you have a function definitelyDoesntLaunchNukes :: Int -> IO Int you don't know that the resulting IO action doesn't launch nukes, it might for all you know. cantLaunchNukes :: Int -> Int will definitely not launch any nukes (barring any ugly hacks that you should avoid in nearly all circumstances).
2) In Haskell, it's not just a cute analogy: IO actions are first-class values. You can put them in lists, and leave them there for as long as you want, they won't do anything unless they somehow become part of the main action. The closest that C has to that are function pointers, which are quite a bit more cumbersome to use. In C++ (and most modern imperative languages really) you have closures which technically could be used for this purpose, but rarely are - mainly because Haskell is pure and they aren't.
Why does that distinction matter here? Well, where are you going to get your other IO actions/closures from? Probably, functions/methods of some description. Which, in an impure language, can themselves have side effects, rendering the attempt of isolating them in these languages pointless.
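A few lines make the point about first-class actions concrete (nothing beyond the Prelude is assumed):

main :: IO ()
main = do
    -- build a list of actions; nothing runs yet
    let actions = [putStrLn "one", putStrLn "two", putStrLn "three"]
    -- execute only two of them, in reverse order: prints "two" then "one"
    sequence_ (reverse (take 2 actions))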
fiction-mode: Active
It was quite a challenge, and I think a wormhole could be forming in the neighbour's backyard, but I managed to grab part of a Haskell I/O implementation from an alternate reality:
class Kleisli k where
    infixr 1 >=>
    simple :: (a -> b) -> (a -> k b)
    (>=>)  :: (a -> k b) -> (b -> k c) -> a -> k c

instance Kleisli IO where
    simple = primSimpleIO
    (>=>)  = primPipeIO

primitive primSimpleIO :: (a -> b) -> (a -> IO b)
primitive primPipeIO   :: (a -> IO b) -> (b -> IO c) -> a -> IO c
Back in our slightly-mutilated reality (sorry!), I have used this other form of Haskell I/O to define our form of Haskell I/O:
instance Monad IO where
    return x = simple (const x) ()
    m >>= k  = (const m >=> k) ()
and it works!
fiction-mode: Offline
My question is whether monads in Haskell actually maintain Haskell's purity, and if so how.
The monadic interface, by itself, doesn't restrain the effects - it is only an interface, albeit a jolly-versatile one. As my little work of fiction shows, there are other possible interfaces for the job - it's just a matter of how convenient they are to use in practice.
For an implementation of Haskell I/O, what keeps the effects under control is that all the pertinent entities, be they:
IO, simple, (>=>) etc
or:
IO, return, (>>=) etc
are abstract - how the implementation defines those is kept private.
Otherwise, you would be able to devise "novelties" like this:
what_the_heck =
  do spare_world <- getWorld   -- how easy was that?
     launchMissiles            -- let's mess everything up,
     putWorld spare_world      -- and bring it all back :-D
     what_the_heck             -- that was fun; let's do it again!
(Aren't you glad our reality isn't quite so pliable? ;-)
This observation extends to types like ST (encapsulated state) and STM (concurrency) and their stewards (runST, atomically etc). For types like lists, Maybe and Either, their orthodox definitions in Haskell mean no visible effects.
So when you see an interface - monadic, applicative, etc - for certain abstract types, any effects (if they exist) are contained by keeping its implementation private; safe from being used in aberrant ways.
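As a small illustration of such a steward keeping effects contained (mutation happens inside ST, but runST hands back an ordinary pure value):

import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- sums a list via a mutable accumulator, yet sumST itself is a pure function
sumST :: [Int] -> Int
sumST xs = runST $ do
    acc <- newSTRef 0
    mapM_ (modifySTRef' acc . (+)) xs
    readSTRef acc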

Program design in Haskell: how to do simulation without mutability

I have a question about the best way to design a program I'm working on in Haskell. I'm writing a physics simulator, which is something I've done a bunch in standard imperative languages, and usually the main method looks something like:
while True:
    simulationState = stepForward(simulationState)
    render(simulationState)
And I'm wondering how to do something similar in Haskell. I have a function step :: SimState -> SimState and a function display :: SimState -> IO () that uses HOpenGL to draw a simulation state, but I'm at a loss as to how to do this in a "loop" of sorts, as all of the solutions I can come up with involve some sort of mutability. I'm a bit of a noob when it comes to Haskell, so it's entirely possible that I'm missing a very obvious design decision. Also, if there's a better way to architect my program as a whole, I'd be glad to hear it.
Thanks in advance!
In my opinion, the right way to think about this problem is not as a loop, but as a list or other such infinite streaming structure. I gave a similar answer to a similar question; the basic idea is, as C. A. McCann wrote, to use iterate stepForward initialState, where iterate :: (a -> a) -> a -> [a] “returns an infinite list of repeated applications of [stepForward] to [initialState]”.
The problem with this approach is that you have trouble dealing with a monadic step, and in particular a monadic rendering function. One approach would just be to take the desired chunk of the list in advance (possibly with a function like takeWhile, possibly with manual recursion) and then mapM_ render on that. A better approach would be to use a different, intrinsically monadic, streaming structure. The four that I can think of are:
The iteratee package, which was originally designed for streaming IO. I think here, your steps would be a source (enumerator) and your rendering would be a sink (iteratee); you could then use a pipe (an enumeratee) to apply functions and/or do filtering in the middle.
The enumerator package, based on the same ideas; one might be cleaner than the other.
The newer pipes package, which bills itself as “iteratees done right”—it's newer, but the semantics are, at least to me, significantly clearer, as are the names (Producer, Consumer, and Pipe).
The List package, in particular its ListT monad transformer. This monad transformer is designed to allow you to create lists of monadic values with more useful structure than [m a]; for instance, working with infinite monadic lists becomes more manageable. The package also generalizes many functions on lists into a new type class. It provides an iterateM function twice; the first time in incredible generality, and the second time specialized to ListT. You can then use functions such as takeWhileM to do your filtering.
The big advantage to reifying your program’s iteration in some data structure, rather than simply using recursion, is that your program can then do useful things with control flow. Nothing too grandiose, of course, but for instance, it separates the “how to terminate” decision from the “how to generate” process. Now, the user (even if it's just you) can separately decide when to stop: after n steps? After the state satisfies a certain predicate? There's no reason to bog down your generating code with these decisions, as it's logically a separate concern.
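To make the non-monadic version of that idea concrete (a minimal sketch; step, render, done, and initialState are assumed to exist with the obvious types):

-- generate states lazily, decide termination separately, then render
main :: IO ()
main = mapM_ render (takeWhile (not . done) (iterate step initialState))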
Well, if drawing successive states is all you want to do, that's pretty simple. First, take your step function and the initial state and use the iterate function. iterate step initialState is then an (infinite) list of each simulation state. You can then map display over that to get IO actions to draw each state, so together you'd have something like this:
allStates :: [SimState]
allStates = iterate step initialState

displayedStates :: [IO ()]
displayedStates = fmap display allStates
The simplest way to run it would be to then use the intersperse function to put a "delay" action between each display action, then use the sequence_ function to run the whole thing:
main :: IO ()
main = sequence_ $ intersperse (delay 20) displayedStates
Of course that means you have to forcibly terminate the application and precludes any sort of interactivity, so it's not really a good way to do it in general.
A more sensible approach would be to interleave things like "seeing if the application should exit" at each step. You can do that with explicit recursion:
runLoop :: SimState -> IO ()
runLoop st = do
    display st
    isDone <- checkInput
    if isDone
        then return ()
        else delay 20 >> runLoop (step st)
My preferred approach is to write non-recursive steps instead and then use a more abstract loop combinator. Unfortunately there's not really good support for doing it that way in the standard libraries, but it would look something like this:
runStep :: SimState -> IO SimState
runStep st = do
    display st
    delay 20
    return (step st)

runLoop :: SimState -> IO ()
runLoop initialState = iterUntilM_ checkInput runStep initialState
Implementing the iterUntilM_ function is left as an exercise for the reader, heh.
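For the curious, here is one plausible implementation matching the usage above (run the body, then consult the predicate):

-- iterate a monadic step until the check action reports True
iterUntilM_ :: Monad m => m Bool -> (a -> m a) -> a -> m ()
iterUntilM_ check body = go
  where
    go st = do
        st' <- body st
        done <- check
        if done then return () else go st'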
Your approach is ok, you just need to remember that loops are expressed as recursion in Haskell:
simulation state = do
    let newState = stepForward state
    render newState
    simulation newState
(But you definitely need a criterion for how to end the loop.)

A way to avoid a common use of unsafePerformIO

I often find this pattern in Haskell code:
options :: MVar OptionRecord
options = unsafePerformIO $ newEmptyMVar
...
doSomething :: Foo -> Bar
doSomething = unsafePerformIO $ do
    opt <- readMVar options
    doSomething'
  where ...
Basically, one has a record of options or something similar that is initially set at the program's beginning. As the programmer is lazy, he doesn't want to carry the options record all over the program. He defines an MVar to keep it - defined by an ugly use of unsafePerformIO. The programmer ensures that the state is set only once and before any operation has taken place. Now each part of the program has to use unsafePerformIO again, just to extract the options.
In my opinion, such a variable is considered pragmatically pure (don't beat me). Is there a library that abstracts this concept away and ensures that the variable is set only once, i.e. that no call is done before that initialization, and that one doesn't have to write unsafeFireZeMissilesAndMakeYourCodeUglyAndDisgustingBecauseOfThisLongFunctionName
Those who would trade essential referential transparency for a little temporary convenience deserve neither purity nor convenience.
This is a bad idea. The code that you're finding this in is bad code.*
There's no way to fully wrap this pattern up safely, because it is not a safe pattern. Do not do this in your code. Do not look for a safe way to do this. There is not a safe way to do this. Put the unsafePerformIO down on the floor, slowly, and back away from the console...
*There are legitimate reasons that people do use top level MVars, but those reasons have to do with bindings to foreign code for the most part, or a few other things where the alternative is very messy. In those instances, as far as I know, however, the top level MVars are not accessed from behind unsafePerformIO.
If you are using an MVar for holding settings or something similar, why don't you try the reader monad?
foo :: ReaderT OptionRecord IO ()
foo = do
    options <- ask
    fireMissiles

main = runReaderT foo (OptionRecord "foo")
(And regular Reader if you don't require IO :P)
Use implicit parameters. They're slightly less heavyweight than making every function have Reader or ReaderT in its type. You do have to change the type signatures of your functions, but I think such a change can be scripted. (Would make a nice feature for a Haskell IDE.)
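A minimal sketch of what that looks like (OptionRecord and its field are invented for this example):

{-# LANGUAGE ImplicitParams #-}

data OptionRecord = OptionRecord { verbose :: Bool }

-- the ?options constraint replaces an explicit OptionRecord argument
doSomething :: (?options :: OptionRecord) => String -> String
doSomething msg
    | verbose ?options = "LOG: " ++ msg
    | otherwise        = msg

main :: IO ()
main = let ?options = OptionRecord True
       in putStrLn (doSomething "hello")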
There is an important reason for not using this pattern. As far as I know, in
options :: MVar OptionRecord
options = unsafePerformIO $ newEmptyMVar
Haskell gives no guarantees that options will be evaluated only once. Since the result of options is a pure value, it can be memoized and reused, but it can also be recomputed for every call (i.e. inlined), and the meaning of the program must not change (contrary to your case).
If you still decide to use this pattern, be sure to add {-# NOINLINE options #-}, otherwise it might get inlined and your program will fail! (And by this we're getting out of the guarantees given by the language and the type system and relying solely on the implementation of a particular compiler.)
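So the conventional (still unsafe) spelling of the pattern is:

import Control.Concurrent.MVar (MVar, newEmptyMVar)
import System.IO.Unsafe (unsafePerformIO)

options :: MVar OptionRecord  -- OptionRecord as in the question
options = unsafePerformIO newEmptyMVar
{-# NOINLINE options #-}  -- without this, the MVar may be duplicated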
This topic has been widely discussed and possible solutions are nicely summarized on Haskell Wiki in Top level mutable state. Currently it's not possible to safely abstract this pattern without some additional compiler support.
I often find this pattern in Haskell code:
Read different code.
As the programmer is lazy, he doesn't want to carry the options record all over the program. He defines an MVar to keep it - defined by an ugly use of unsafePerformIO. The programmer ensures that the state is set only once and before any operation has taken place. Now each part of the program has to use unsafePerformIO again, just to extract the options.
Sounds like literally exactly what the reader monad accomplishes, except that the reader monad does it in a safe way. Instead of accommodating your own laziness, just write actual good code.
