Is pattern matching tuples not fully lazy? - haskell

I ran across some very unintuitive behavior in Haskell and am not sure if it is a bug in Haskell or not. If this is intended behavior, what specifically causes the infinite loop?
import Data.Function (fix)
main = putStrLn $ show $ fst $
  fix (\rec -> (\(a1, a2) -> (3, 7)) rec)
This loops forever, I assume because pattern matching rec against (a1,a2) forces rec to be evaluated. But I feel like it shouldn't need to, because everything would match anyway. For example, these all work correctly:
fix (\rec-> (\a->(3,7)) rec)
fix (\rec-> (\a->let (a1,a2)=a in (3,7)) rec)
Interestingly, this exits with an error:
fix (\rec-> (\(a1,a2)->(3,7)) undefined)
But I could have sworn the original code where I encountered this behavior (which was more complicated) would actually work in this case. I could try to recreate it.

Correct, tuple pattern matches are not lazy. In fact, with two notable exceptions, no pattern match is lazy. To a first approximation, the "point" of pattern matching is to force evaluation of the scrutinee of the match.
Of course, a pattern match may appear in a place where laziness means it is never performed; for example,
const 3 (case loop of (_, _) -> "hi!")
will not loop. But that's not because the match is lazy -- rather, it's because const is lazy and never causes the match to be performed. Your let example is similar, as the semantics of let is that variables bound by it are not forced before evaluating the body of the let.
One of the two notable exceptions is a special pattern modifier, ~; putting ~ before a pattern says to make the match lazy and, consequently, is a claim to the compiler that the modified pattern always matches. (If it doesn't match, you get a crash once one of the bound variables is forced, not the usual fall-through-to-next-pattern behavior!) So, one fix for your fix is:
fix (\rec -> (\ ~(a1, a2) -> (3, 7)) rec)
Of course, the explicit name for rec isn't needed. You could just as well write:
fix (\ ~(a1, a2) -> (3, 7))
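As a quick sanity check (my own snippet, reusing the question's import), the lazy-pattern version terminates:
import Data.Function (fix)

main :: IO ()
main = print (fst (fix (\ ~(a1, a2) -> (3, 7))))   -- prints 3; the tuple is never forced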
The other notable exception is newtype matching, which is not relevant here.
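For completeness, here is a small sketch of that exception (my own, with a made-up Wrap type): a newtype constructor has no run-time representation, so matching on it forces nothing.
newtype Wrap = Wrap Int

ok :: Int
ok = case (undefined :: Wrap) of Wrap _ -> 42   -- evaluates to 42; the match forces nothing
Had Wrap been declared with data instead of newtype, the same case expression would hit the undefined.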

A tuple (or any other) pattern match is lazy when it appears in a let, and strict when it appears in a case. Apparently, a lambda application is reduced via case, not via let.
(Correction: for a refutable pattern pat, (\pat -> ...) val is the same as (\x -> case x of pat -> ...) val, i.e. let x = val in case x of pat -> ..., so it's not that the reduction is done via case, but that the identity (\pat -> ...) == (\x -> case x of pat -> ...) must hold.)
Thus your code reduces as
fix (\rec-> (\(a1,a2)->(3,7)) rec)
=
let { x = (\rec-> (\(a1,a2)->(3,7)) rec) x } in x
=
let { x = (\(a1,a2)->(3,7)) x } in x
=
let { x = (\y -> case y of (a1,a2)->(3,7)) x } in x
=
let { x = case x of {(a1,a2)->(3,7)} } in x
and that's why the pattern match of x against (_,_) is performed before it is revealed that the result would be (3,7) (but isn't!).
So, Haskell is not a declarative language; its semantics is given by least fixed points.
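A minimal illustration of that let/case difference (my own sketch, not from the original post):
viaLet :: (Int, Int)
viaLet = let (a1, a2) = undefined in (3, 7)    -- fine: the let-bound match is implicitly lazy

viaCase :: (Int, Int)
viaCase = case (undefined :: (Int, Int)) of (a1, a2) -> (3, 7)   -- forcing this hits undefined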

Eta-conversion changes semantics in a strict language

Take this OCaml code:
let silly (g : (int -> int) -> int) (f : int -> int -> int) =
  g (f (print_endline "evaluated"; 0))
silly (fun _ -> 0) (fun x -> fun y -> x + y)
It prints evaluated and returns 0. But if I eta-expand f to get g (fun x -> f (print_endline "evaluated"; 0) x), evaluated is no longer printed.
Same holds for this SML code:
fun silly (g : (int -> int) -> int, f : int -> int -> int) : int =
  g (f (print "evaluated" ; 0));
silly ((fn _ => 0), fn x => fn y => x + y);
On the other hand, this Haskell code doesn't print evaluated even with the strict pragma:
{-# LANGUAGE Strict #-}
import Debug.Trace
silly :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
silly g f = g (f (trace "evaluated" 0))
main = print $ silly (const 0) (+)
(I can make it, though, by using seq, which makes perfect sense for me)
While I understand that OCaml and SML do the right thing theoretically, is there any practical reason to prefer this behaviour to the "lazier" one? Eta-contraction is a common refactoring tool, and I'm now scared of using it in a strict language. I feel like I should eta-expand everything out of paranoia, just because otherwise arguments to partially applied functions can be evaluated when they're not supposed to be. When is the "strict" behaviour useful?
Why and how does Haskell behave differently under the Strict pragma? Are there any references I can familiarize myself with to better understand the design space and pros and cons of the existing approaches?
To address the technical part of your question: eta-conversion also changes the meaning of expressions in lazy languages; you just need to consider the eta-rule for a different type constructor, e.g., + instead of ->.
This is the eta-rule for binary sums:
(case e of Lft y -> f (Lft y) | Rgt y -> f (Rgt y)) = f e (eta-+)
This equation holds under eager evaluation, because e will always be reduced on both sides. Under lazy evaluation, however, the r.h.s. only reduces e if f also forces it. That might make the l.h.s. diverge where the r.h.s. would not. So the equation does not hold in a lazy language.
To make it concrete in Haskell:
f x = 0
lhs = case undefined of Left y -> f (Left y); Right y -> f (Right y)
rhs = f undefined
Here, trying to print lhs will diverge, whereas rhs yields 0.
There is more that could be said about this, but the essence is that the equational theories of both evaluation regimes are sort of dual.
The underlying problem is that under a lazy regime every type is inhabited by _|_ (non-termination), whereas under an eager one it is not. That has severe semantic consequences. In particular, there are no inductive types in Haskell, and you cannot prove termination of a structurally recursive function, e.g., a list traversal.
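For instance (my own example): because every type also contains partial and infinite values, even a plainly structural list traversal carries no termination guarantee.
ones :: [Int]
ones = 1 : ones    -- a perfectly legal inhabitant of [Int]

-- head ones    -- 1, thanks to laziness
-- length ones  -- diverges, even though length is structurally recursive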
There is a line of research in type theory distinguishing data types (strict) from codata types (non-strict) and providing both in a dual manner, thus giving the best of both worlds.
Edit: As for the question why a compiler should not eta-expand functions: that would utterly break every language. In a strict language with effects that's most obvious, because the ability to stage effects via multiple function abstractions is a feature. The simplest example perhaps is this:
let make_counter () =
  let x = ref 0 in
  fun () -> x := !x + 1; !x

let tick = make_counter ()
let n1 = tick ()
let n2 = tick ()
let n3 = tick ()
But effects are not the only reason. Eta-expansion can also drastically change the performance of a program! In the same way you sometimes want to stage effects you sometimes also want to stage work:
match :: String -> String -> Bool
match regex = \s -> run fsm s
  where fsm = ...expensive transformation of regex...

matchFloat = match "[0-9]+(\\.[0-9]*)?((e|E)(+|-)?[0-9]+)?"
Note that I used Haskell here, because this example shows that implicit eta-expansion is not desirable in either eager or lazy languages!
With respect to your final question (why does Haskell do this), the reason "Strict Haskell" behaves differently from a truly strict language is that the Strict extension doesn't really change the evaluation model from lazy to strict. It just makes a subset of bindings into "strict" bindings by default, and only in the limited Haskell sense of forcing evaluation to weak head normal form. Also, it only affects bindings made in the module with the extension turned on; it doesn't retroactively affect bindings made elsewhere. (Moreover, as described below, the strictness doesn't take effect in partial function application. The function needs to be fully applied before any arguments are forced.)
In your particular Haskell example, I believe the only effect of the Strict extension is as if you had explicitly written the following bang patterns in the definition of silly:
silly !g !f = g (f (trace "evaluated" 0))
It has no other effect. In particular, it doesn't make const or (+) strict in their arguments, nor does it generally change the semantics of function applications to make them eager.
So, when the term silly (const 0) (+) is forced by print, the only effect is to evaluate its arguments to WHNF as part of the function application of silly. The effect is similar to writing (in non-Strict Haskell):
let { g = const 0; f = (+) } in g `seq` f `seq` silly g f
Obviously, forcing g and f to their WHNFs (which are lambdas) isn't going to have any side effect, and when silly is applied, const 0 is still lazy in its remaining argument, so the resulting term is something like:
(\x -> 0) ((\x y -> <defn of plus>) (trace "evaluated" 0))
(which should be interpreted without the Strict extension -- these are all lazy bindings here), and there's nothing here that will force the side effect.
As noted above, there's another subtle issue that this example glosses over. Even if you had made everything in sight strict:
{-# LANGUAGE Strict #-}
import Debug.Trace
myConst :: a -> b -> a
myConst x y = x
myPlus :: Int -> Int -> Int
myPlus x y = x + y
silly :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
silly g f = g (f (trace "evaluated" 0))
main = print $ silly (myConst 0) myPlus
this still wouldn't have printed "evaluated". This is because, in the evaluation of silly when the strict version of myConst forces its second argument, that argument is a partial application of the strict version of myPlus, and myPlus won't force any of its arguments until it's been fully applied.
This also means that if you change the definition of myPlus to:
myPlus x = \y -> x + y -- now it will print "evaluated"
then you'll be able to largely reproduce the ML behavior. Because myPlus is now fully applied, it will force its argument, and this will print "evaluated". You can suppress it again by eta-expanding f in the definition of silly:
silly g f = g (\x -> f (trace "evaluated" 0) x) -- now it won't
because now when myConst forces its second argument, that argument is already in WHNF (because it's a lambda), and we never get to the application of f, full or not.
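As an aside, the seq workaround the question alludes to can be sketched without the Strict extension (sillySeq is my own name, just for illustration):
import Debug.Trace (trace)

sillySeq :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
sillySeq g f = let arg = trace "evaluated" 0 in arg `seq` g (f arg)
Here print (sillySeq (const 0) (+)) does print "evaluated", because seq forces arg before g has a chance to discard it.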
In the end, I guess I wouldn't take "Haskell plus the Strict extension and unsafe side effects like trace" too seriously as a good point in the design space. Its semantics may be (barely) coherent, but they sure are weird. I think the only serious use case is when you have some code whose semantics "obviously" don't depend on lazy versus strict evaluation but where performance would be improved by a lot of forcing. Then, you can just turn on Strict for a performance boost without having to think too hard.

Non linear patterns in quasi-quotes

I followed this tutorial to implement a quasi-quoted DSL, and I now want to support non-linear patterns in a quoted pattern. That will allow a repeated binder in a pattern to assert the equality of the matched data. For example, one can then write eval [expr| $a + $a |] = 2 * eval a. I modified antiExpPat as follows:
antiExpPat (MetaExp s) =
  Just $ do
    b <- lookupValueName s
    let n  = mkName s
        p0 = VarP n
    p1 <- viewP [|(== $(varE n))|] [p|True|]
    let res = case b of
                Nothing -> p0
                _       -> p1
    return res
antiExpPat _ = Nothing
The idea is to use lookupValueName to check if the anti-quoted name s is in scope. If not, then just create a binder with the same name. Otherwise, create a view pattern (== s) -> True that asserts the matched data equals to the data already bound to s. Essentially, I want to convert the quoted pattern [expr| $a + $a |] to the Haskell pattern (Add a ((== a) -> True)).
But that didn't work. The resulting Haskell pattern is Add a a, which means lookupValueName never thinks a is in scope. Am I misunderstanding how lookupValueName works? Or is there a better way to implement non linear patterns here?
The full code is here if you want to play with it. In short, I'm making a quasi quoter to match on Java source.
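For reference, here is a hand-written, non-Template-Haskell sketch of the kind of pattern the quoter is meant to generate (the Expr type and names are mine, purely for illustration):
{-# LANGUAGE ViewPatterns #-}

data Expr = Lit Int | Add Expr Expr deriving (Eq, Show)

-- The second occurrence of a becomes a view pattern that tests equality
-- against the value already bound by the first occurrence.
evalTwice :: Expr -> Maybe Expr
evalTwice (Add a ((== a) -> True)) = Just a
evalTwice _                        = Nothing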
Update 1:
As @chi pointed out, lookupValueName only checks the splice's context, whereas I need to check the splice's content. Any idea how to proceed with that?
Update 2:
So I bit the bullet and threaded the set of in-scope names with a state monad, and traversed the parse tree with transformM which replaces every in-scope meta-variable x with ((== x) -> True):
dataToPatQ (const Nothing `extQ` ...) (evalState (rename s) DS.empty)
...
rename :: Language.Java.Syntax.Stmt -> State (DS.Set String) Language.Java.Syntax.Stmt
rename p = transformM rnvar p
  where
    rnvar (MetaStmt n) = do
      s <- get
      let res = if DS.member n s
                  then SAssertEq n
                  else MetaStmt n
      put (DS.insert n s)
      return res
    rnvar x = return x
It got the right result on the inputs I have, but I have no idea if it is correct, especially given transformM traverses the tree bottom-up so inner meta-variables may be added to the set first.

Expressing assertions more naturally

Suppose I write a function
f [x, y] = x + y
f [x, y, z] = z - x - y
This is filled out by the compiler with an extra line saying something like
f _ = error "pattern match failed"
If f is not exported, and I know it's only applied properly, and the function is performance-critical, I may want to avoid having an extra pattern in the production code. I could rewrite this rather unnaturally something like
f l = assert (atLeastTwo l) $
  let (x, r1) = (unsafeHead l, unsafeTail l) in
  let (y, r2) = (unsafeHead r1, unsafeTail r1) in
  case r2 of
    []       -> x + y
    (z : r3) -> assert (r3 == []) $ z - x - y
What I'd like to do is write the original function definition with an extra line:
f _ = makeDemonsComeOutOfMyNose "This error is impossible."
The descriptively named magical function would be compiled as error when assertions or inferred safe Haskell are enabled, and marked as unreachable (rendering the pattern match unsafe) when assertions are disabled. Is there a way to do this, or something similar?
Edit
To address jberryman's concerns about whether there is a real performance impact:
This is a hypothetical question. I suspect that in more complicated cases, where there are multiple "can't happen" cases, there is likely to be a performance benefit—at the least, error cases can use extra space in the instruction cache.
Even if there isn't a real performance issue, I think there's also an expressive distinction between an assertion and an error. I suspect the most flexible assertion form is "this code should be unreachable", perhaps with an argument or three indicating how seriously the compiler should take that claim. Safety is relative—if a data structure invariant is broken and causes the program to leak confidential information, that's not necessarily any less serious than an invalid memory access. Note that, roughly speaking, assert p x = if p then x else makeDemonsFlyOutOfMyNose NO_REAL_DEMONS_PLEASE "assertion failed", but there's no way to define the demon function in terms of assert.
GHC is clever enough to optimize the unused pattern match away. Here's a simple program.
module Foo (foo) where
data List a = Nil | Cons a (List a)
link :: List a -> List a -> List a
link Nil _ = error "link: Nil"
link (Cons a _) xs = Cons a xs
l1 = Cons 'a' (Cons 'b' Nil)
foo = link l1
This is a very contrived example, but it demonstrates the case where GHC can prove that link (or in your case f) is being called on a statically known constructor (or can otherwise prove which pattern match will succeed via inlining, simplifying etc.)
And here's the Core output:
foo1 :: Char
foo1 = C# 'a'
foo :: List Char -> List Char
foo = \ (ds :: List Char) -> Cons foo1 ds
The error case doesn't show up anywhere in the Core for Foo. So you can be assured that in cases like this, there is absolutely no performance difference incurred by having an extra unused pattern match.
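If you want to check this for your own function, compiling with ghc -O2 -ddump-simpl (optionally with -dsuppress-all to cut the noise) shows the Core GHC actually generates, so you can see whether the error branch survived.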

What's with the 'in' keyword?

In Haskell, why do you not use 'in' with 'let' inside a do-block, while you must use it elsewhere?
For example, in the somewhat contrived examples below:
afunc :: Int -> Int
afunc a =
  let x = 9 in
  a * x

amfunc :: IO Int -> IO Int
amfunc a = do
  let x = 9
  a' <- a
  return (a' * x)
It's an easy enough rule to remember, but I just don't understand the reason for it.
You are providing expressions to define both afunc and amfunc. Let-expressions and do-blocks are both expressions. However, while a let-expression introduces a new binding that scopes around the expression given after the 'in' keyword, a do-block isn't made of expressions: it is a sequence of statements. There are three forms of statements in a do-block:
a computation whose result is bound to some variable x, as in
x <- getChar
a computation whose result is ignored, as in
putStrLn "hello"
a let-statement, as in
let x = 3 + 5
A let-statement introduces a new binding, just as let-expressions do. The scope of this new binding extends over all the remaining statements in the do-block.
In short, what comes after the 'in' in a let-expression is an expression, whereas what comes after a let-statement is a sequence of further statements. I can of course write a particular statement using a let-expression, but then the scope of the binding would not extend beyond that statement to the statements that follow. Consider:
do putStrLn "hello"
let x = 3 + 5 in putStrLn "eight"
putStrLn (show x)
The above code causes the following error message in GHC:
Not in scope: `x'
whereas
do putStrLn "hello"
let x = 3 + 5
putStrLn "eight"
putStrLn (show x)
works fine.
You can indeed use let .. in in do-notation. In fact, according to the Haskell Report, the following
do{let decls; stmts}
desugars into
let decls in do {stmts}
I imagine the statement form is useful because you might otherwise need some deep indentation, or explicit delimiting, of the "in" block, stretching from your in all the way to the end of the do-block.
The short answer is that Haskell do blocks are funny. Haskell is an expression-based language—except in do blocks, because the point of do blocks is to provide for a "statement" syntax of sorts. Most "statements" are just expressions of type Monad m => m a, but there are two syntaxes that don't correspond to anything else in the language:
Binding the result of an action with <-: x <- action is a "statement" but not an expression. This syntax requires x :: a and action :: Monad m => m a.
The in-less variant of let, which is like an assignment statement (but for pure code on the right hand side). In let x = expr, it must be the case that x :: a and expr :: a.
Note that just like uses of <- can be desugared (in that case, into >>= and lambda), the in-less let can always be desugared into the regular let ... in ...:
do { let x = blah; ... }
=> let x = blah in do { ... }
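A tiny end-to-end example of that equivalence (throwaway names, my own):
greetDo :: IO ()
greetDo = do
  let name = "world"                 -- an in-less let, i.e. a statement
  putStrLn ("hello, " ++ name)

greetLet :: IO ()
greetLet = let name = "world" in do putStrLn ("hello, " ++ name)
Both definitions are the same program after desugaring.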

Loop through a set of functions with Haskell

Here's a simple, barebones example of how the code that I'm trying to do would look in C++.
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
In the program I'm working on, I have some functions that I need to loop through until bool state equals false (or until one of the variables, let's say variable b, equals 0).
How would this code be done in Haskell? I've searched through here, Google, and even Bing and haven't been able to find any clear, straightforward explanations of how to do repetitive actions with functions.
Any help would be appreciated.
Taking Daniel's comment into account, it could look something like this:
f = loop init_a init_b True
  where
    loop a b True = loop a' b' (fun3 a' b')
      where
        a' = fun1 ....
        b' = fun2 .....
    loop a b False = (a, b)
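To make that skeleton concrete, here is one runnable instantiation, with hypothetical arithmetic standing in for fun1, fun2, and fun3:
concrete :: (Int, Int)
concrete = loop 0 10 True
  where
    loop a b True = loop a' b' (b' /= 0)   -- "fun3" here is just the test b' /= 0
      where
        a' = a + 1                         -- stands in for fun1
        b' = b - 1                         -- stands in for fun2
    loop a b False = (a, b)                -- concrete == (10, 0)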
Well, here's a suggestion of how to map the concepts here:
A C++ loop is some form of list operation in Haskell.
One iteration of the loop = handling one element of the list.
Looping until a certain condition becomes true = base case of a function that recurses on a list.
But there is something that is critically different between imperative loops and functional list functions: loops describe how to iterate; higher-order list functions describe the structure of the computation. So for example, map f [a0, a1, ..., an] can be described by this diagram:
[a0, a1, ..., an]
| | |
f f f
| | |
v v v
[f a0, f a1, ..., f an]
Note that this describes how the result is related to the arguments f and [a0, a1, ..., an], not how the iteration is performed step by step.
Likewise, foldr f z [a0, a1, ..., an] corresponds to this:
f a0 (f a1 (... (f an z)))
filter doesn't quite lend itself to diagramming, but it's easy to state many rules that it satisfies:
length (filter pred xs) <= length xs
For every element x of filter pred xs, pred x is True.
If x is an element of filter pred xs, then x is an element of xs
If x is not an element of xs, then x is not an element of filter pred xs
If x appears before x' in filter pred xs, then x appears before x' in xs
If x appears before x' in xs, and both x and x' appear in filter pred xs, then x appears before x' in filter pred xs
In a classic imperative program, all three of these cases are written as loops, and the difference between them comes down to what the loop body does. Functional programming, on the contrary, insists that this sort of structural pattern does not belong in "loop bodies" (the functions f and pred in these examples); rather, these patterns are best abstracted out into higher-order functions like map, foldr and filter. Thus, every time you see one of these list functions you instantly know some important facts about how the arguments and the result are related, without having to read any code; whereas in a typical imperative program, you must read the bodies of loops to figure this stuff out.
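For example, a loop that sums the squares of the even elements becomes a pipeline of exactly these combinators (a throwaway illustration, not tied to the question's code):
sumEvenSquares :: [Int] -> Int
sumEvenSquares xs = foldr (+) 0 (map (^ 2) (filter even xs))
sumEvenSquares [1 .. 5] is 4 + 16 = 20, and you can read that off from the structure without simulating any loop.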
So the real answer to your question is that it's impossible to offer an idiomatic translation of an imperative loop into functional terms without knowing what the loop body is doing: what the preconditions are supposed to be before the loop runs, and what the postconditions are supposed to be when the loop finishes. Because that loop body, which you only described vaguely, is going to determine what the structure of the computation is, and different such structures will call for different higher-order functions in Haskell.
First of all, let's think about a few things.
Does function1 have side effects?
Does function2 have side effects?
Does function3 have side effects?
The answer to all of these is a resoundingly obvious YES, because they take no inputs, and presumably there are circumstances which cause you to go around the while loop more than once (rather than def function3(): return false). Now let's remodel these functions with explicit state.
s = initialState
sentinel = true
while (sentinel):
    a, b, s, sentinel = function1(a, b, s, sentinel)
    a, b, s, sentinel = function2(a, b, s, sentinel)
    a, b, s, sentinel = function3(a, b, s, sentinel)
return a, b, s
Well that's rather ugly. We know absolutely nothing about what inputs each function draws from, nor do we know anything about how these functions might affect the variables a, b, and sentinel, nor "any other state" which I have simply modeled as s.
So let's make a few assumptions. Firstly, I am going to assume that these functions do not directly depend on nor affect in any way the values of a, b, and sentinel. They might, however, change the "other state". So here's what we get:
s = initState
sentinel = true
while (sentinel):
    a, s2 = function1(s)
    b, s3 = function2(s2)
    sentinel, s4 = function3(s3)
    s = s4
return a, b, s
Notice I've used temporary variables s2, s3, and s4 to indicate the changes that the "other state" goes through. Haskell time. We need a control function to behave like a while loop.
myWhile :: s                   -- an initial state
        -> (s -> (Bool, a, s)) -- given a state, produces a sentinel, a current result, and the next state
        -> (a, s)              -- the result, plus resultant state
myWhile s f = case f s of
  (False, a, s') -> (a, s')
  (True, _, s')  -> myWhile s' f
Now how would one use such a function? Well, given we have the functions:
function1 :: MyState -> (AType, MyState)
function2 :: MyState -> (BType, MyState)
function3 :: MyState -> (Bool, MyState)
We would construct the desired code as follows:
thatCodeBlockWeAreTryingToSimulate :: MyState -> ((AType, BType), MyState)
thatCodeBlockWeAreTryingToSimulate initState = myWhile initState f
  where
    f :: MyState -> (Bool, (AType, BType), MyState)
    f s = let (a, s2)        = function1 s
              (b, s3)        = function2 s2
              (sentinel, s4) = function3 s3
          in (sentinel, (a, b), s4)
Notice how similar this is to the non-ugly python-like code given above.
You can verify that the code I have presented is well-typed by adding function1 = undefined etc for the three functions, as well as the following at the top of the file:
{-# LANGUAGE EmptyDataDecls #-}
data MyState
data AType
data BType
So the takeaway message is this: in Haskell, you must explicitly model the changes in state. You can use the "State Monad" to make things a little prettier, but you should first understand the idea of passing state around.
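A sketch of what that might look like with Control.Monad.State from mtl, reusing the signatures above (still just an illustration, not the only way to structure it):
import Control.Monad.State (State, runState, state)

loopS :: State MyState (AType, BType)
loopS = do
  a        <- state function1
  b        <- state function2
  sentinel <- state function3
  if sentinel then loopS else return (a, b)
Running it with runState loopS initState plays the same role as thatCodeBlockWeAreTryingToSimulate.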
Let's take a look at your C++ loop:
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
Haskell is a pure functional language, so it won't fight us as much (and the resulting code will be more useful, both in itself and as an exercise to learn Haskell) if we try to do this without side effects, and without using monads to make it look like we're using side effects either.
Let's start with this structure:
while (state == true) {
    <<do stuff that updates state>>
}
In Haskell we're obviously not going to be checking a variable against true as the loop condition, because it can't change its value[1] and we'd either evaluate the loop body forever or never. So instead, we'll want to be evaluating a function that returns a boolean value on some argument:
while (check something == True) {
    <<do stuff that updates state>>
}
Well, now we don't have a state variable, so that "do stuff that updates state" is looking pretty pointless. And we don't have a something to pass to check. Let's think about this a bit more. We want the something being checked to depend on what the "do stuff" bit is doing. We don't have side effects, so that means the something has to be (or be derived from) whatever "do stuff" returns. "do stuff" also needs to take something that varies as an argument, or it'll just keep returning the same thing forever, which is also pointless. We also need to return a value out of all this, otherwise we're just burning CPU cycles (again, with no side effects there's no point running a function if we don't use its output in some way, and there's even less point running a function repeatedly if we never use its output).
So how about something like this:
while check func state =
  let next_state = func state in
    if check next_state
      then while check func next_state
      else next_state
Let's try it in GHCi:
*Main> while (<20) (+1) 0
20
This is the result of applying (+1) repeatedly while the result is less than 20, starting from 0.
*Main> while ((<20) . length) (++ "zob") ""
"zobzobzobzobzobzobzob"
This is the result of concatenating "zob" repeatedly while the result's length is less than 20, starting from the empty string.
So you can see I've defined a function that is (sort of a bit) analogous to a while loop from imperative languages. We didn't even need dedicated loop syntax for it! (which is the real reason Haskell has no such syntax; if you need this kind of thing you can express it as a function). It's not the only way to do so, and experienced Haskell programmers would probably use other standard library functions to do this kind of job, rather than writing while.
But I think it's useful to see how you can express this kind of thing in Haskell. It does show that you can't translate things like imperative loops directly into Haskell; I didn't end up translating your loop in terms of my while because it ends up pretty pointless; you never use the result of function1 or function2, they're called with no arguments so they'd always return the same thing in every iteration, and function3 likewise always returns the same thing, and can only return true or false to either cause while to keep looping or stop, with no information resulting.
Presumably in the C++ program they're all using side effects to actually get some work done. If they operate on in-memory things then you need to translate a bigger chunk of your program at once to Haskell for the translation of this loop to make any sense. If those functions are doing IO then you'll need to do this in the IO monad in Haskell, for which my while function doesn't work, but you can do something similar.
[1] As an aside, it's worth trying to understand that "you can't change variables" in Haskell isn't just an arbitrary restriction, nor merely an acceptable trade-off for the benefits of purity; it's a concept that doesn't make sense given the way Haskell wants you to think about Haskell code. You're writing down expressions that result from evaluating functions on certain arguments: in f x = x + 1 you're saying that f x is x + 1. If you really think of it that way, rather than thinking "f takes x, then adds one to it, then returns the result", then the concept of "having side effects" doesn't even apply: how could something existing and being equal to something else somehow change a variable, or have some other side effect?
You should write a solution to your problem in a more functional style.
That said, some code in Haskell works a lot like imperative looping; take for example state monads, tail recursion, until, foldr, etc.
A simple example is the factorial. In C you would write a loop, whereas in Haskell you can, for example, write fact n = foldr (*) 1 [2..n].
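Since until was mentioned: the same factorial can be phrased with it, looping on a (counter, accumulator) pair (a small sketch of my own):
factUntil :: Integer -> Integer
factUntil n = snd (until (\(i, _) -> i > n) (\(i, acc) -> (i + 1, acc * i)) (2, 1))
factUntil 5 is 120.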
If you've got two functions f :: a -> b and g :: b -> c, where a, b, and c are types like String or [Int], then you can compose them simply by writing g . f.
If you wish to map them over a list or vector, you could write map (g . f) or V.map (g . f), assuming you've done import qualified Data.Vector as V.
Example: I wish to print a list of markdown headings like ## <number>. <heading> ##, but I need roman numerals numbered from 1, and my list of headings has type [(String, Double)], where the Double is irrelevant.
import Data.List
import Text.Numeral.Roman
let fun = zipWith (\a b -> a ++ ". " ++ b ++ "##\n") (map toRoman [1..]) . map fst
fun [("Foo",3.5),("Bar",7.1)]
What the hell does this do?
toRoman turns a number into a string containing the roman numeral. map toRoman does this to every element of a list. map toRoman [1..] does it to every element of the lazy infinite list [1,2,3,4,..], yielding a lazy infinite list of roman numeral strings.
fst :: (a,b) -> a simply extracts the first element of a tuple. map fst throws away the irrelevant Double information along the entire list.
\a b -> a ++ ". " ++ b ++ "##\n" is a lambda expression that takes two strings and concatenates them together with the desired formatting.
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] takes a two-argument function like our lambda expression and feeds it pairs of elements from its own second and third arguments.
You'll observe that zip, zipWith, etc. only read as much of the lazy infinite list of roman numerals as needed for the list of headings, meaning I've numbered my headings without maintaining any counter variable.
Finally, I have declared fun without naming its argument because the compiler can figure it out from the fact that map fst requires one argument. You'll notice that I put a . before my second map too. I could've written (map fst h) or $ map fst h instead if I'd written fun h = ..., but leaving the argument off fun meant I needed to compose map fst with zipWith after applying zipWith to two of the three arguments it wants.
I'd hope the compiler combines the zipWith and maps into one single loop via inlining.
