Expressing assertions more naturally - Haskell

Suppose I write a function
f [x, y] = x + y
f [x, y, z] = z - x - y
This is filled out by the compiler with an extra line saying something like
f _ = error "pattern match failed"
If f is not exported, and I know it's only applied properly, and the function is performance-critical, I may want to avoid having an extra pattern in the production code. I could rewrite this rather unnaturally something like
f l = assert (atLeastTwo l) $
  let (x, r1) = (unsafeHead l, unsafeTail l) in
  let (y, r2) = (unsafeHead r1, unsafeTail r1) in
  case r2 of
    []       -> x + y
    (z : r3) -> assert (r3 == []) $ z - x - y
What I'd like to do is write the original function definition with an extra line:
f _ = makeDemonsComeOutOfMyNose "This error is impossible."
The descriptively named magical function would be compiled as error when assertions or inferred safe Haskell are enabled, and marked as unreachable (rendering the pattern match unsafe) when assertions are disabled. Is there a way to do this, or something similar?
Edit
To address jberryman's concerns about whether there is a real performance impact:
This is a hypothetical question. I suspect that in more complicated cases, where there are multiple "can't happen" cases, there is likely to be a performance benefit—at the least, error cases can use extra space in the instruction cache.
Even if there isn't a real performance issue, I think there's also an expressive distinction between an assertion and an error. I suspect the most flexible assertion form is "this code should be unreachable", perhaps with an argument or three indicating how seriously the compiler should take that claim. Safety is relative: if a data structure invariant is broken and causes the program to leak confidential information, that's not necessarily any less serious than an invalid memory access. Note that, roughly speaking, assert p x = if p then x else makeDemonsComeOutOfMyNose NO_REAL_DEMONS_PLEASE "assertion failed", but there's no way to define the demon function in terms of assert.
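For what it's worth, one rough approximation today is a CPP-guarded helper that becomes a checked error when an (assumed) ASSERTS flag is defined and a bare, message-only error otherwise. This is only a sketch: the name impossible and the ASSERTS macro are made up here, and GHC offers no user-facing way to mark the branch as truly unreachable.
{-# LANGUAGE CPP #-}

-- Sketch only: `impossible` and ASSERTS are hypothetical, not an existing API.
#ifdef ASSERTS
impossible :: String -> a
impossible msg = error ("impossible: " ++ msg)      -- checked, with a message
#else
impossible :: String -> a
impossible _ = errorWithoutStackTrace "impossible"  -- cheaper, but still a call to error
#endif

f :: [Int] -> Int
f [x, y]    = x + y
f [x, y, z] = z - x - y
f _         = impossible "f applied to a list of the wrong length"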

GHC is clever enough to optimize the unused pattern match away. Here's a simple program.
module Foo (foo) where
data List a = Nil | Cons a (List a)
link :: List a -> List a -> List a
link Nil _ = error "link: Nil"
link (Cons a _) xs = Cons a xs
l1 = Cons 'a' (Cons 'b' Nil)
foo = link l1
This is a very contrived example, but it demonstrates the case where GHC can prove that link (or in your case f) is being called on a statically known constructor (or can otherwise prove which pattern match will succeed via inlining, simplifying etc.)
And here's the Core output:
foo1 :: Char
foo1 = C# 'a'
foo :: List Char -> List Char
foo = \ (ds :: List Char) -> Cons foo1 ds
The error case doesn't show up anywhere in the Core for Foo. So you can be assured that in cases like this, there is absolutely no performance difference incurred by having an extra unused pattern match.
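If you want to reproduce this yourself, the Core shown above can be dumped directly by GHC with standard flags:
ghc -O2 -ddump-simpl -dsuppress-all Foo.hs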

Related

Is pattern matching tuples not fully lazy?

I ran across some very unintuitive behavior in Haskell and am not sure if it is a bug in Haskell or not. If this is intended behavior, what specifically causes the infinite loop?
import Data.Function (fix)
main = putStrLn $ show $ fst $
    fix (\rec-> (\(a1,a2)->(3,7)) rec)
It loops forever, I assume because pattern matching rec against (a1,a2) forces its evaluation. But I feel like it shouldn't need to, because everything will match. For example, these all work correctly:
fix (\rec-> (\a->(3,7)) rec)
fix (\rec-> (\a->let (a1,a2)=a in (3,7)) rec)
Interestingly, this exits with an error:
fix (\rec-> (\(a1,a2)->(3,7)) undefined)
But I could have sworn the original code where I encountered this behavior (which was more complicated) would actually work in this case. I could try to recreate it.
Correct, tuple pattern matches are not lazy. In fact, with two notable exceptions, no pattern match is lazy. To a first approximation, the "point" of pattern matching is to force evaluation of the scrutinee of the match.
Of course, a pattern match may appear in a place where laziness means it is never performed; for example,
const 3 (case loop of (_, _) -> "hi!")
will not loop. But that's not because the match is lazy -- rather, it's because const is lazy and never causes the match to be performed. Your let example is similar, as the semantics of let is that variables bound by it are not forced before evaluating the body of the let.
One of the two notable exceptions is a special pattern modifier, ~; putting ~ before a pattern says to make the match lazy and, consequently, is a claim to the compiler that the modified pattern always matches. (If it doesn't match, you get a crash once one of the bound variables is forced, not the usual fall-through-to-next-pattern behavior!) So, one fix for your fix is:
fix (\rec -> (\ ~(a1, a2) -> (3, 7)) rec)
Of course, the explicit name for rec isn't needed. You could just as well write:
fix (\ ~(a1, a2) -> (3, 7))
The other notable exception is newtype matching, which is not relevant here.
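Here is the question's program with that fix applied, as a complete file (only the ~ is new):
import Data.Function (fix)

main :: IO ()
main = print $ fst $ fix (\ ~(a1, a2) -> (3, 7))
-- prints 3: the lazy pattern defers the match, so fix can produce the pair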
Tuple (or any other) pattern matches are lazy in a let, but not lazy in a case. Apparently, a lambda application is reduced via case, not via let.
(Correction: for a refutable pattern pat, (\pat -> ...) val is the same as (\x -> case x of pat -> ...) val == let x = val in case x of pat -> ..., so it's not that the reduction is done via case, but that the identity (\pat -> ...) == (\x -> case x of pat -> ...) must hold.)
Thus your code reduces as
fix (\rec-> (\(a1,a2)->(3,7)) rec)
=
let { x = (\rec-> (\(a1,a2)->(3,7)) rec) x } in x
=
let { x = (\(a1,a2)->(3,7)) x } in x
=
let { x = (\y -> case y of (a1,a2)->(3,7)) x } in x
=
let { x = case x of {(a1,a2)->(3,7)} } in x
and that's why the pattern match of x against (a1,a2) is performed before it is revealed that the result would be (3,7) (but isn't!).
So, Haskell is not a declarative language; its semantics is given by least fixed points.
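The difference is easy to observe directly. A minimal sketch (the names lazyLet and strictCase are just illustrative):
lazyLet :: Int
lazyLet = let (a, b) = undefined in 3       -- fine: the let pattern is never forced

strictCase :: Int
strictCase = case undefined of (a, b) -> 3  -- forcing this forces undefined

main :: IO ()
main = do
  print lazyLet     -- prints 3
  print strictCase  -- throws Prelude.undefined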

Eta-conversion changes semantics in a strict language

Take this OCaml code:
let silly (g : (int -> int) -> int) (f : int -> int -> int) =
  g (f (print_endline "evaluated"; 0))

silly (fun _ -> 0) (fun x -> fun y -> x + y)
It prints evaluated and returns 0. But if I eta-expand f to get g (fun x -> f (print_endline "evaluated"; 0) x), evaluated is no longer printed.
Same holds for this SML code:
fun silly (g : (int -> int) -> int, f : int -> int -> int) : int =
  g (f (print "evaluated" ; 0));

silly ((fn _ => 0), fn x => fn y => x + y);
On the other hand, this Haskell code doesn't print evaluated even with the strict pragma:
{-# LANGUAGE Strict #-}
import Debug.Trace
silly :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
silly g f = g (f (trace "evaluated" 0))
main = print $ silly (const 0) (+)
(I can make it, though, by using seq, which makes perfect sense for me)
While I understand that OCaml and SML do the right thing theoretically, are there any practical reason to prefer this behaviour to the "lazier" one? Eta-contraction is a common refactoring tool and I'm totally scared of using it in a strict language. I feel like I should paranoically eta-expand everything, just because otherwise arguments to partially applied functions can be evaluated when they're not supposed to. When is the "strict" behaviour useful?
Why and how does Haskell behave differently under the Strict pragma? Are there any references I can familiarize myself with to better understand the design space and pros and cons of the existing approaches?
To address the technical part of your question: eta-conversion also changes the meaning of expressions in lazy languages; you just need to consider the eta-rule of a different type constructor, e.g., + instead of ->.
This is the eta-rule for binary sums:
(case e of Lft y -> f (Lft y) | Rgt y -> f (Rgt y)) = f e (eta-+)
This equation holds under eager evaluation, because e will always be reduced on both sides. Under lazy evaluation, however, the r.h.s. only reduces e if f also forces it. That might make the l.h.s. diverge where the r.h.s. would not. So the equation does not hold in a lazy language.
To make it concrete in Haskell:
f x = 0
lhs = case undefined of Left y -> f (Left y); Right y -> f (Right y)
rhs = f undefined
Here, trying to print lhs will diverge, whereas rhs yields 0.
There is more that could be said about this, but the essence is that the equational theories of both evaluation regimes are sort of dual.
The underlying problem is that under a lazy regime, every type is inhabited by _|_ (non-termination), whereas under eager it is not. That has severe semantic consequences. In particular, there are no inductive types in Haskell, and you cannot prove termination of a structural recursive function, e.g., a list traversal.
There is a line of research in type theory distinguishing data types (strict) from codata types (non-strict) and providing both in a dual manner, thus giving the best of both worlds.
Edit: As for the question why a compiler should not eta-expand functions: that would utterly break every language. In a strict language with effects that's most obvious, because the ability to stage effects via multiple function abstractions is a feature. The simplest example perhaps is this:
let make_counter () =
  let x = ref 0 in
  fun () -> x := !x + 1; !x

let tick = make_counter ()
let n1 = tick ()
let n2 = tick ()
let n3 = tick ()
But effects are not the only reason. Eta-expansion can also drastically change the performance of a program! In the same way you sometimes want to stage effects you sometimes also want to stage work:
match :: String -> String -> Bool
match regex = \s -> run fsm s
  where fsm = ...expensive transformation of regex...

matchFloat = match "[0-9]+(\\.[0-9]*)?((e|E)(+|-)?[0-9]+)?"
Note that I used Haskell here, because this example shows that implicit eta-expansion is not desirable in either eager or lazy languages!
With respect to your final question (why does Haskell do this), the reason "Strict Haskell" behaves differently from a truly strict language is that the Strict extension doesn't really change the evaluation model from lazy to strict. It just makes a subset of bindings into "strict" bindings by default, and only in the limited Haskell sense of forcing evaluation to weak head normal form. Also, it only affects bindings made in the module with the extension turned on; it doesn't retroactively affect bindings made elsewhere. (Moreover, as described below, the strictness doesn't take effect in partial function application. The function needs to be fully applied before any arguments are forced.)
In your particular Haskell example, I believe the only effect of the Strict extension is as if you had explicitly written the following bang patterns in the definition of silly:
silly !g !f = g (f (trace "evaluated" 0))
It has no other effect. In particular, it doesn't make const or (+) strict in their arguments, nor does it generally change the semantics of function applications to make them eager.
So, when the term silly (const 0) (+) is forced by print, the only effect is to evaluate its arguments to WHNF as part of the function application of silly. The effect is similar to writing (in non-Strict Haskell):
let { g = const 0; f = (+) } in g `seq` f `seq` silly g f
Obviously, forcing g and f to their WHNFs (which are lambdas) isn't going to have any side effect, and when silly is applied, const 0 is still lazy in its remaining argument, so the resulting term is something like:
(\x -> 0) ((\x y -> <defn of plus>) (trace "evaluated" 0))
(which should be interpreted without the Strict extension -- these are all lazy bindings here), and there's nothing here that will force the side effect.
As noted above, there's another subtle issue that this example glosses over. Even if you had made everything in sight strict:
{-# LANGUAGE Strict #-}
import Debug.Trace
myConst :: a -> b -> a
myConst x y = x
myPlus :: Int -> Int -> Int
myPlus x y = x + y
silly :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
silly g f = g (f (trace "evaluated" 0))
main = print $ silly (myConst 0) myPlus
this still wouldn't have printed "evaluated". This is because, in the evaluation of silly when the strict version of myConst forces its second argument, that argument is a partial application of the strict version of myPlus, and myPlus won't force any of its arguments until it's been fully applied.
This also means that if you change the definition of myPlus to:
myPlus x = \y -> x + y -- now it will print "evaluated"
then you'll be able to largely reproduce the ML behavior. Because myPlus is now fully applied, it will force its argument, and this will print "evaluated". You can suppress it again by eta-expanding f in the definition of silly:
silly g f = g (\x -> f (trace "evaluated" 0) x) -- now it won't
because now when myConst forces its second argument, that argument is already in WHNF (because it's a lambda), and we never get to the application of f, full or not.
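A minimal sketch of that partial-application point (myPlus is the same as above; the output comments are what I'd expect under Strict):
{-# LANGUAGE Strict #-}
import Debug.Trace

myPlus :: Int -> Int -> Int
myPlus x y = x + y                -- under Strict, effectively myPlus !x !y = x + y

main :: IO ()
main = do
  let partial = myPlus (trace "evaluated" 0)  -- a partial application; prints nothing
  print (partial 1)  -- myPlus is now saturated, so "evaluated" is printed, then 1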
In the end, I guess I wouldn't take "Haskell plus the Strict extension and unsafe side effects like trace" too seriously as a good point in the design space. Its semantics may be (barely) coherent, but they sure are weird. I think the only serious use case is when you have some code whose semantics "obviously" don't depend on lazy versus strict evaluation but where performance would be improved by a lot of forcing. Then, you can just turn on Strict for a performance boost without having to think too hard.

Why would you write Haskell like this?

I've been reading some Haskell code and keep seeing functions that look something like this:
ok :: a -> Result i w e a
ok a =
  Result $ \i w _ good ->
    good i w a
Why is a lambda used? Why wouldn't you just write the following?:
ok :: a -> Result i w e a
ok a =
  Result $ good i w a
This is continuation passing style or "CPS".
So first off, your alternative example doesn't make sense. good, i, and w are not known at the point they are used, and you will get an error.
The basic idea of continuation passing style is that instead of returning the relevant information, you instead call a function that you are given (in this case good), passing it your intended result as an argument. Presumably (based on the naming) the ignored argument _ would have been called bad, and it is a function that you would call in the case of failure.
If you are the ok function, it's like the difference between asking you to
Bake me a batch of cookies.
(where I have the intention of giving the cookies to Dave), and
Bake a batch of cookies and then give it to Dave.
which accomplishes the same thing but now I don't have to be a middleman anymore. There are often performance advantages to cutting me out as a middleman, and also it means you can do more things, for example if the batch of cookies is really good you might decide to give it to your mom instead of Dave (thus aborting whatever Dave would have done with them), or bake two batches and give them both to Dave (duplicating what Dave would have done). Sometimes you want this ability and other times you don't, it depends on context. (N.B. in the below examples the types are sufficiently general to disallow these possibilities)
Here is a very simple example of continuation passing style. Say you have a program
pred :: Integer -> Maybe Integer
pred n = if n > 0 then Just (n-1) else Nothing
which subtracts 1 from a number and returns it (in a Just constructor), unless it would become negative, in which case it returns Nothing. You might use it like this:
main = do
  x <- readLn
  case pred x of
    Just predx -> putStrLn $ "The predecessor is " ++ show predx
    Nothing    -> putStrLn "Can't take the predecessor"
We can encode this in continuation passing style by, instead of returning Maybe, having pred take an argument for what to do in each case:
pred :: Integer -> (Integer -> r) -> r -> r
--                 ^^^^^^^^^^^^^^    ^
--                    Just case      |
--                               Nothing case
pred n ifPositive ifNegative =
  if n > 0
    then ifPositive (n-1)
    else ifNegative
And the usage becomes:
main = do
  x <- readLn
  pred x (\predx -> putStrLn $ "The predecessor is " ++ show predx)
         (putStrLn "Can't take the predecessor")
See how that works? -- doing it the first way we got the result and then did case analysis; in the second way each case became an argument to the function. And in the process the call to pred became a tail call, eliminating the need for a stack frame and the intermediate Maybe data structure.
The only remaining problem is that pred's signature is kind of confusing. We can make it a bit clearer by wrapping the CPS stuff in its own type constructor:
newtype CPSMaybe a = CPSMaybe (forall r. (a -> r) -> r -> r)

pred :: Integer -> CPSMaybe Integer
pred n = CPSMaybe $ \ifPositive ifNegative ->
  if n > 0
    then ifPositive (n-1)
    else ifNegative
which has a signature that looks more like the first one but with code that looks like the second (except for the CPSMaybe newtype wrapper, which has no effect at runtime). And now maybe you can see the connection to the code in your question.
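As a sketch, here is how the earlier main could be written against this CPS interface (this assumes RankNTypes and hides the Prelude's own pred):
{-# LANGUAGE RankNTypes #-}
import Prelude hiding (pred)

newtype CPSMaybe a = CPSMaybe (forall r. (a -> r) -> r -> r)

pred :: Integer -> CPSMaybe Integer
pred n = CPSMaybe $ \ifPositive ifNegative ->
  if n > 0 then ifPositive (n - 1) else ifNegative

main :: IO ()
main = do
  x <- readLn
  case pred x of
    CPSMaybe k ->
      k (\predx -> putStrLn $ "The predecessor is " ++ show predx)
        (putStrLn "Can't take the predecessor")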
Well, the Result type apparently wraps a function, so a lambda is the natural thing to use here. If you wanted to avoid a lambda, you could use a local definition instead using let or where, e.g.:
ok a =
  let proceed i w _ good = good i w a
  in Result proceed

-- or --

ok a = Result proceed
  where
    proceed i w _ good = good i w a
Writing this won’t work because the variables i, w, and good are not in scope:
ok :: a -> Result i w e a
ok a =
  Result $ good i w a
I wonder if the source of your confusion is the fact that i and w are also used as type variables in the signature of ok, but they’re different variables that happen to have the same names. It’s just as if you’d written something like this:
ok :: a -> Result i w e a
ok value =
Result $ continue index writer value
Here it should be obvious that the continue, index, and writer variables aren’t defined.
In the first sample, good, w, and i are locally defined parameters of the lambda expression. In the second sample, they are free variables; I would expect it to fail with an error saying that those identifiers are not in scope. Result apparently is a type that contains information about how to use given data and handlers, and ok says to take the data and apply the handler indicating a good outcome to it. In the second sample, it is not even clear that one is referring to the arguments available to what Result wraps, or which names refer to which arguments.

Evaluation strategy

How should one reason about function evaluation in examples like the following in Haskell:
let f x = ...
    x   = ...
in map (g (f x)) xs
In GHC, sometimes (f x) is evaluated only once, and sometimes once for each element in xs, depending on what exactly f and g are. This can be important when f x is an expensive computation. It has just tripped a Haskell beginner I was helping and I didn't know what to tell him other than that it is up to the compiler. Is there a better story?
Update
In the following example (f x) will be evaluated 4 times:
let f x = trace "!" $ zip x x
    x   = "abc"
in map (\i -> lookup i (f x)) "abcd"
With language extensions, we can create situations where f x must be evaluated repeatedly:
{-# LANGUAGE GADTs, Rank2Types #-}
module MultiEvG where

data BI where
  B :: (Bounded b, Integral b) => b -> BI

foo :: [BI] -> [Integer]
foo xs = let f :: (Integral c, Bounded c) => c -> c
             f x = maxBound - x
             g :: (forall a. (Integral a, Bounded a) => a) -> BI -> Integer
             g m (B y) = toInteger (m + y)
             x :: (Integral i) => i
             x = 3
         in map (g (f x)) xs
The crux is to have f x polymorphic even as the argument of g, and we must create a situation where the type(s) at which it is needed can't be predicted (my first stab used an Either a b instead of BI, but when optimising, that of course led to only two evaluations of f x at most).
A polymorphic expression must be evaluated at least once for each type it is used at. That's one reason for the monomorphism restriction. However, when the range of types it can be needed at is restricted, it is possible to memoise the values at each type, and in some circumstances GHC does that (needs optimising, and I expect the number of types involved mustn't be too large). Here we confront it with what is basically an inhomogeneous list, so in each invocation of g (f x), it can be needed at an arbitrary type satisfying the constraints, so the computation cannot be lifted outside the map (technically, the compiler could still build a cache of the values at each used type, so it would be evaluated only once per type, but GHC doesn't, in all likelihood it wouldn't be worth the trouble).
Monomorphic expressions need only be evaluated once, they can be shared. Whether they are is up to the implementation; by purity, it doesn't change the semantics of the programme. If the expression is bound to a name, in practice you can rely on it being shared, since it's easy and obviously what the programmer wants. If it isn't bound to a name, it's a question of optimisation. With the bytecode generator or without optimisations, the expression will often be evaluated repeatedly, but with optimisations repeated evaluation would indicate a compiler bug.
Polymorphic expressions must be evaluated at least once for every type they're used at, but with optimisations, when GHC can see that it may be used multiple times at the same type, it will (usually) still be shared for that type during a larger computation.
Bottom line: Always compile with optimisations, help the compiler by binding expressions you want shared to a name, and give monomorphic type signatures where possible.
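Concretely, for the snippet in the question's update, that advice looks something like this sketch: binding f x to a name (fx here) makes the sharing explicit, so the trace fires once rather than four times.
import Debug.Trace (trace)

main :: IO ()
main = print $
  let f x = trace "!" $ zip x x
      x   = "abc"
      fx  = f x                        -- bound to a name, so it is shared
  in map (\i -> lookup i fx) "abcd"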
Your examples are indeed quite different.
In the first example, the argument to map is g (f x), and it is passed to map once, most likely as a partially applied function.
If g (f x), when applied to an argument within map, evaluates its first argument, then this will be done only once, and the thunk (f x) will be updated with the result.
Hence, in your first example, f x will be evaluated at most once.
Your second example requires a deeper analysis before the compiler can arrive at the conclusion that (f x) is always constant in the lambda expression. Perhaps it will never optimize it at all, because it may have knowledge that trace is not quite kosher. So, this may evaluate 4 times when tracing, and 4 times or 1 time when not tracing.
This is really dependent on GHC's optimizations, as you've been able to tell.
The best thing to do is to study the GHC core that you get after optimizing the program. I would look at the generated Core and examine whether f x had its own let statement outside the map or not.
If you want to be sure, then you should factor f x out into its own variable assigned in a let, but there's not really a guaranteed way to figure it out other than reading through Core.
All that said, with the exception of things like trace that use unsafePerformIO, this will never change the semantics of your program: how it actually behaves.
In GHC without optimizations, the body of a function is evaluated every time the function is called. (A "call" means the function is applied to arguments and the result is evaluated.) In the following example, f x is inside a function, so it will execute each time the function is called.
(GHC may optimize this expression as discussed in the FAQ [1].)
let f x = trace "!" $ zip x x
    x   = "abc"
in map (\i -> lookup i (f x)) "abcd"
However, if we move f x out of the function, it will execute only once.
let f x = trace "!" $ zip x x
    x   = "abc"
in map ((\f_x i -> lookup i f_x) (f x)) "abcd"
This can be rewritten more readably as
let f x = trace "!" $ zip x x
    x   = "abc"
    g f_x i = lookup i f_x
in map (g (f x)) "abcd"
The general rule is that, each time a function is applied to an argument, a new "copy" of the function body is created. Function application is the only thing that may cause an expression to re-execute. However, be warned that some functions and function calls do not look like functions syntactically.
[1] http://www.haskell.org/haskellwiki/GHC/FAQ#Subexpression_Elimination

Loop through a set of functions with Haskell

Here's a simple, barebones example of how the code that I'm trying to do would look in C++.
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
In the program I'm working on, I have some functions that I need to loop through until bool state equals false (or until one of the variables, let's say variable b, equals 0).
How would this code be done in Haskell? I've searched through here, Google, and even Bing, and haven't been able to find any clear, straightforward explanations of how to do repetitive actions with functions.
Any help would be appreciated.
Taking Daniel's comment into account, it could look something like this:
f = loop init_a init_b True
  where
    loop a b True = loop a' b' (fun3 a' b')
      where
        a' = fun1 ....
        b' = fun2 .....
    loop a b False = (a, b)
Well, here's a suggestion of how to map the concepts here:
A C++ loop is some form of list operation in Haskell.
One iteration of the loop = handling one element of the list.
Looping until a certain condition becomes true = base case of a function that recurses on a list.
But there is something that is critically different between imperative loops and functional list functions: loops describe how to iterate; higher-order list functions describe the structure of the computation. So for example, map f [a0, a1, ..., an] can be described by this diagram:
[a0,   a1,   ...,   an]
  |     |           |
  f     f           f
  |     |           |
  v     v           v
[f a0, f a1, ..., f an]
Note that this describes how the result is related to the arguments f and [a0, a1, ..., an], not how the iteration is performed step by step.
Likewise, foldr f z [a0, a1, ..., an] corresponds to this:
f a0 (f a1 (... (f an z)))
filter doesn't quite lend itself to diagramming, but it's easy to state many rules that it satisfies:
length (filter pred xs) <= length xs
For every element x of filter pred xs, pred x is True.
If x is an element of filter pred xs, then x is an element of xs
If x is not an element of xs, then x is not an element of filter pred xs
If x appears before x' in filter pred xs, then x appears before x' in xs
If x appears before x' in xs, and both x and x' appear in filter pred xs, then x appears before x' in filter pred xs
In a classic imperative program, all three of these cases are written as loops, and the difference between them comes down to what the loop body does. Functional programming, on the contrary, insists that this sort of structural pattern does not belong in "loop bodies" (the functions f and pred in these examples); rather, these patterns are best abstracted out into higher-order functions like map, foldr and filter. Thus, every time you see one of these list functions you instantly know some important facts about how the arguments and the result are related, without having to read any code; whereas in a typical imperative program, you must read the bodies of loops to figure this stuff out.
So the real answer to your question is that it's impossible to offer an idiomatic translation of an imperative loop into functional terms without knowing what the loop body is doing—what are the preconditions supposed to be before the loop runs, and what the postconditions are supposed to be when the loop finishes. Because that loop body that you only described vaguely is going to determine what the structure of the computation is, and different such structures will call for different higher-order functions in Haskell.
First of all, let's think about a few things.
Does function1 have side effects?
Does function2 have side effects?
Does function3 have side effects?
The answer to all of these is a resoundingly obvious YES, because they take no inputs, and presumably there are circumstances which cause you to go around the while loop more than once (rather than def function3(): return false). Now let's remodel these functions with explicit state.
s = initialState
sentinel = true
while (sentinel):
    a, b, s, sentinel = function1(a, b, s, sentinel)
    a, b, s, sentinel = function2(a, b, s, sentinel)
    a, b, s, sentinel = function3(a, b, s, sentinel)
return a, b, s
Well that's rather ugly. We know absolutely nothing about what inputs each function draws from, nor do we know anything about how these functions might affect the variables a, b, and sentinel, nor "any other state" which I have simply modeled as s.
So let's make a few assumptions. Firstly, I am going to assume that these functions do not directly depend on nor affect in any way the values of a, b, and sentinel. They might, however, change the "other state". So here's what we get:
s = initState
sentinel = true
while (sentinel):
    a, s2 = function1(s)
    b, s3 = function2(s2)
    sentinel, s4 = function3(s3)
    s = s4
return a, b, s
Notice I've used temporary variables s2, s3, and s4 to indicate the changes that the "other state" goes through. Haskell time. We need a control function to behave like a while loop.
myWhile :: s                    -- an initial state
        -> (s -> (Bool, a, s))  -- given a state, produces a sentinel, a current result, and the next state
        -> (a, s)               -- the result, plus the resultant state
myWhile s f = case f s of
  (False, a, s') -> (a, s')
  (True, _, s')  -> myWhile s' f
Now how would one use such a function? Well, given we have the functions:
function1 :: MyState -> (AType, MyState)
function2 :: MyState -> (BType, MyState)
function3 :: MyState -> (Bool, MyState)
We would construct the desired code as follows:
thatCodeBlockWeAreTryingToSimulate :: MyState -> ((AType, BType), MyState)
thatCodeBlockWeAreTryingToSimulate initState = myWhile initState f
  where f :: MyState -> (Bool, (AType, BType), MyState)
        f s = let (a, s2)        = function1 s
                  (b, s3)        = function2 s2
                  (sentinel, s4) = function3 s3
              in (sentinel, (a, b), s4)
Notice how similar this is to the non-ugly python-like code given above.
You can verify that the code I have presented is well-typed by adding function1 = undefined etc for the three functions, as well as the following at the top of the file:
{-# LANGUAGE EmptyDataDecls #-}
data MyState
data AType
data BType
So the takeaway message is this: in Haskell, you must explicitly model the changes in state. You can use the "State Monad" to make things a little prettier, but you should first understand the idea of passing state around.
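For reference, here is a sketch of the same loop using the State monad from the mtl package, with the three functions left as undefined stubs just to show the types:
{-# LANGUAGE EmptyDataDecls #-}
import Control.Monad.State

data MyState
data AType
data BType

function1 :: MyState -> (AType, MyState)
function1 = undefined
function2 :: MyState -> (BType, MyState)
function2 = undefined
function3 :: MyState -> (Bool, MyState)
function3 = undefined

loopS :: State MyState (AType, BType)
loopS = do
  a        <- state function1
  b        <- state function2
  continue <- state function3
  if continue then loopS else return (a, b)

-- equivalent in shape to thatCodeBlockWeAreTryingToSimulate above:
runBlock :: MyState -> ((AType, BType), MyState)
runBlock = runState loopS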
Let's take a look at your C++ loop:
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
Haskell is a pure functional language, so it won't fight us as much (and the resulting code will be more useful, both in itself and as an exercise to learn Haskell) if we try to do this without side effects, and without using monads to make it look like we're using side effects either.
Let's start with this structure:
while (state == true) {
    <<do stuff that updates state>>
}
In Haskell we're obviously not going to be checking a variable against true as the loop condition, because it can't change its value[1] and we'd either evaluate the loop body forever or never. So instead, we'll want to be evaluating a function that returns a boolean value on some argument:
while (check something == True) {
    <<do stuff that updates state>>
}
Well, now we don't have a state variable, so that "do stuff that updates state" is looking pretty pointless. And we don't have a something to pass to check. Let's think about this a bit more. We want the something to be checked to depend on what the "do stuff" bit is doing. We don't have side effects, so that means something has to be (or be derived from) returned from the "do stuff". "do stuff" also needs to take something that varies as an argument, or it'll just keep returning the same thing forever, which is also pointless. We also need to return a value out of all this, otherwise we're just burning CPU cycles (again, with no side effects there's no point running a function if we don't use its output in some way, and there's even less point running a function repeatedly if we never use its output).
So how about something like this:
while check func state =
  let next_state = func state in
    if check next_state
      then while check func next_state
      else next_state
Let's try it in GHCi:
*Main> while (<20) (+1) 0
20
This is the result of applying (+1) repeatedly while the result is less than 20, starting from 0.
*Main> while ((<20) . length) (++ "zob") ""
"zobzobzobzobzobzobzob"
This is the result of concatenating "zob" repeatedly while the result's length is less than 20, starting from the empty string.
So you can see I've defined a function that is (sort of a bit) analogous to a while loop from imperative languages. We didn't even need dedicated loop syntax for it! (which is the real reason Haskell has no such syntax; if you need this kind of thing you can express it as a function). It's not the only way to do so, and experienced Haskell programmers would probably use other standard library functions to do this kind of job, rather than writing while.
But I think it's useful to see how you can express this kind of thing in Haskell. It does show that you can't translate things like imperative loops directly into Haskell; I didn't end up translating your loop in terms of my while because it ends up pretty pointless; you never use the result of function1 or function2, they're called with no arguments so they'd always return the same thing in every iteration, and function3 likewise always returns the same thing, and can only return true or false to either cause while to keep looping or stop, with no information resulting.
Presumably in the C++ program they're all using side effects to actually get some work done. If they operate on in-memory things then you need to translate a bigger chunk of your program at once to Haskell for the translation of this loop to make any sense. If those functions are doing IO then you'll need to do this in the IO monad in Haskell, for which my while function doesn't work, but you can do something similar.
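As noted above, experienced Haskell programmers would often reach for a standard combinator rather than a hand-written while; the Prelude's until has exactly this shape (with the condition inverted: it loops until the predicate holds), so the GHCi examples above become:
*Main> until (>= 20) (+1) 0
20
*Main> until ((>= 20) . length) (++ "zob") ""
"zobzobzobzobzobzobzob"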
[1] As an aside, it's worth trying to understand that "you can't change variables" in Haskell isn't just an arbitrary restriction, nor is it merely an acceptable trade-off for the benefits of purity; rather, changing variables is a concept that doesn't make sense in the way Haskell wants you to think about Haskell code. You're writing down expressions that result from evaluating functions on certain arguments: in f x = x + 1 you're saying that f x is x + 1. If you really think of it that way, rather than thinking "f takes x, then adds one to it, then returns the result", then the concept of "having side effects" doesn't even apply; how could something existing and being equal to something else somehow change a variable, or have some other side effect?
You should approach your problem in a more functional way.
That said, some code in Haskell works a lot like imperative looping; take for example the State monad, tail recursion, until, foldr, etc.
A simple example is the factorial. In C you would write a loop, whereas in Haskell you can, for example, write fact n = foldr (*) 1 [2..n].
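Both styles side by side, as a small sketch:
fact :: Integer -> Integer
fact n = foldr (*) 1 [2..n]

-- the same computation as an explicit tail-recursive loop with an accumulator,
-- closest in shape to the C version:
factLoop :: Integer -> Integer
factLoop n = go 1 2
  where
    go acc i
      | i > n     = acc
      | otherwise = go (acc * i) (i + 1)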
If you have two functions f :: a -> b and g :: b -> c, where a, b, and c are types like String or [Int], then you can compose them simply by writing g . f.
If you wish them to loop over a list or vector, you could write map (g . f) or V.map (g . f), assuming you've done import qualified Data.Vector as V.
Example: I wish to print a list of markdown headings like ## <number>. <heading> ##, but I need Roman numerals numbered from 1, and my list of headings has type [(String,Double)], where the Double is irrelevant.
import Data.List
import Text.Numeral.Roman

let fun = zipWith (\a b -> a ++ ". " ++ b ++ "##\n") (map toRoman [1..]) . map fst
fun [("Foo",3.5),("Bar",7.1)]
What the hell does this do?
toRoman turns a number into a string containing the Roman numeral. map toRoman does this to every element of a list. map toRoman [1..] does it to every element of the lazy infinite list [1,2,3,4,..], yielding a lazy infinite list of Roman numeral strings.
fst :: (a,b) -> a simply extracts the first element of a tuple. map fst throws away our silly Double information along the entire list.
\a b -> a ++ ". " ++ b ++ "##\n" is a lambda expression that takes two strings and concatenates them together with the desired formatting strings.
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] takes a two-argument function like our lambda expression and feeds it pairs of elements from its own second and third arguments.
You'll observe that zip, zipWith, etc. only read as much of the lazy infinite list of Roman numerals as needed for the list of headings, meaning I've numbered my headings without maintaining any counter variable.
Finally, I have declared fun without naming its argument because the compiler can figure it out from the fact that map fst requires one argument. You'll notice that I put a . before my second map too. I could've written (map fst h) or $ map fst h instead if I'd written fun h = ..., but leaving the argument off fun meant I needed to compose it with zipWith after applying zipWith to two of the three arguments it wants.
I'd hope the compiler combines the zipWith and maps into one single loop via inlining.

Resources