I have a list of relations and I wish to print the names of all fathers. Since there's no else condition, the following code doesn't work:
relations = [("father", "arushi", "anandan"), ("mother", "arushi", "abigale"), ("father", "anandan", "ayuta"), ("mother", "anandan", "akanksha")]
father ((r, c, f):xs) = if r == "father" then print(f)
main = do
    father (relations)
I do not wish to put any statement after else.
Too bad, all ifs come with elses. But that's okay, there's a distinguished do-nothing IO action.
father ((r, c, f):xs) = if r == "father" then print f else return ()
There are many other ways to skin this cat. One is pattern matching.
father (("father", c, f):xs) = print f
father ((r, c, f):xs) = return ()
Another that is specific to monadic actions is to use when.
father ((r, c, f):xs) = when (r == "father") (print f)
Of course, that's just hiding the else, which is again return ():
when p act = if p then act else pure () -- okay, it's actually pure, not return
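For reference, the snippets above only examine the first tuple in the list. A complete program that walks the whole list might look like this (a sketch, not the original poster's code):

import Control.Monad (when)

relations :: [(String, String, String)]
relations = [("father", "arushi", "anandan"), ("mother", "arushi", "abigale"), ("father", "anandan", "ayuta"), ("mother", "anandan", "akanksha")]

-- walk the whole list, printing only the fathers
father :: [(String, String, String)] -> IO ()
father [] = return ()
father ((r, _, f):xs) = do
    when (r == "father") (print f)
    father xs

main :: IO ()
main = father relations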
The idiomatic Haskell way to solve such issues is to avoid mixing computation and I/O, when possible.
In this case, instead of "printing the names of all fathers", you can first "compute the names of all fathers" (no I/O here) and then "print the computed names" (I/O here).
relations =
    [ ("father", "arushi", "anandan")
    , ("mother", "arushi", "abigale")
    , ("father", "anandan", "ayuta")
    , ("mother", "anandan", "akanksha")
    ]
-- compute only the fathers
fathers = [ f | ("father", _, f) <- relations ]
-- print them
main :: IO ()
main = mapM_ putStrLn fathers
No if needed, since mapM_ iterates over the list for us, and all the list entries have to be printed.
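For reference, with the relations above the comprehension picks out exactly the two father entries:

*Main> fathers
["anandan","ayuta"]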
Every if must have an else.
father ((r, c, f):xs) =
    if r == "father"
        then print f
        else _what
If you try to compile that, you'll be informed that there's a hole
_what :: IO ()
So you need to manufacture something of that type. Fortunately, that's easy:
father ((r, c, f):xs) =
    if r == "father"
        then print f
        else pure ()
pure x does nothing and returns x.
Since what you're trying to do is quite common, there are two functions specifically designed for the task:
when :: Applicative f => Bool -> f () -> f ()
when b m = if b then m else pure ()
unless :: Applicative f => Bool -> f () -> f ()
unless = when . not
You can find both of these functions in Control.Monad.
father ((r, c, f):xs) =
    when (r == "father") $ print f
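As a quick illustration of the pair (my example, not from the answer): both lines below print, since when runs its action when the test is True and unless runs its action when the test is False.

import Control.Monad (when, unless)

main :: IO ()
main = do
    when   (1 < 2) $ putStrLn "when fires on True"
    unless (2 < 1) $ putStrLn "unless fires on False"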
You can write a function that always writes the name, but then ensure it only gets called on values containing father.
relations :: [(String,String,String)]
relations = [("father", "arushi", "anandan")
            ,("mother", "arushi", "abigale")
            ,("father", "anandan", "ayuta")
            ,("mother", "anandan", "akanksha")
            ]
printName :: (String,String,String) -> IO ()
printName (_, _, name) = print name
printFathers :: [(String,String,String)] -> [IO ()]
printFathers = fmap printName . filter (\(f, _, _) -> f == "father")
main = sequence (printFathers relations)
The definition of filter hides the logic of skipping certain elements of the list. The argument to filter always returns either True or False, but the result of filter only contains those elements for which you want to call print.
(sequence, here, just turns the list of IO values into the single IO value that main must be by "swapping" IO and []. You could incorporate this into printFathers by defining it as sequence . fmap printName . ..., and replace sequence . fmap foo with traverse foo.)
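Spelling that suggestion out with traverse_ from Data.Foldable (a sketch reusing printName and relations from above; traverse_ discards the unit results, so main keeps the type IO ()):

import Data.Foldable (traverse_)

main :: IO ()
main = traverse_ printName (filter (\(r, _, _) -> r == "father") relations)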
Note that if foo then bar else baz is syntactic sugar for a complete case expression
case foo of
    True  -> bar
    False -> baz
However, a case expression doesn't have to handle every possible value of the foo argument. You could write
father ((r, c, f):xs) = (case r of "father" -> print f) : father xs
It would be instructive, though, to see what happens when r doesn't match "father".
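Trying it at the prompt shows the failure mode (the exact message varies by GHC version):

*Main> case "mother" of "father" -> print "x"
*** Exception: <interactive>: Non-exhaustive patterns in case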
I feel the need to explain why an if must have an else in Haskell.
Haskell is an implementation of typed lambda calculus, and in lambda calculus we have expressions and values, nothing else.
In it we evaluate/reduce expressions to values, or to expressions that can't be reduced any further.
Now in typed lambda calculus we add types and abstractions, but we still evaluate down to values and expressions, one of these expressions being if predicate then value else value.
This if expression must reduce to a value, therefore both branches of the if expression must reduce to values of the same type.
If we had an "if predicate then value", we would have a branch that doesn't reduce to a value.
(You can use run, reduce and evaluate interchangeably in the context of this answer.)
When we run Haskell code we are reducing lambda terms into values or expressions that can't be reduced any further.
The compiler exists to help us write valid lambda terms.
Going by lambda calculus, we see that the if expression must, when evaluated, reduce to a value (or be capable of doing so), and because Haskell is an implementation of typed lambda calculus, an if expression without an else wouldn't always be able to evaluate down to a value.
TL;DR
The "if ... then ... else" statement should when evaluated reduce to a value.
As long as both branches of the if statement evaluates to the same type it evaluates correctly.
If any branch doesn't evaluate to a value or are going to evaluate to values of different types that is not a valid lambda term and the code will not typecheck.
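A two-line illustration of that TL;DR (my example, not from the answer): the first definition is rejected because its branches have different types; the second is accepted.

bad  = if True then (1 :: Int) else "one"   -- type error: Int vs String
good = if True then "1"        else "one"   -- fine: both branches are String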
I was struggling with this style as well, and with Haskell's layout rules. But I found that, instead of placing emphasis on the required else, one can use else do to include all following lines of code without an extra indentation level on each of them, such as:
main = do
    -- if the condition holds, exit main
    if t < 1 || t > 100000
        then return ()
        else do
            -- code here, even with the else,
            -- is only run when the if condition above is False
The proof code can be even simpler:
main :: IO ()
main =
    if True
        then return ()
        else do
            -- add code below this line to prove return () ends the branch
            putStrLn "never reached"
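Here's a complete, compiling version of that pattern (t is a made-up example value; the original left it unspecified):

main :: IO ()
main = do
    let t = 42 :: Int
    if t < 1 || t > 100000
        then return ()
        else do
            -- everything in this do block runs only when the guard is False
            putStrLn "t is in range"
            print (t * 2)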
I'm writing an interpreter for a small language.
This language supports mutation, so its evaluator keeps track of a Store for all the variables (where type Store = Map.Map Address Value, type Address = Int, and data Value is a language-specific ADT).
It's also possible for computations to fail (e.g., dividing by zero), so the result has to be an Either String Value.
The type of my interpreter, then, is
eval :: Environment -> Expression -> State Store (Either String Value)
where type Environment = Map.Map Identifier Address keeps track of local bindings.
For example, interpreting a constant literal doesn't need to touch the store, and the result always succeeds, so
eval _ (LiteralExpression v) = return $ Right v
But when we apply a binary operator, we do need to consider the store.
For example, if the user evaluates (+ (x <- (+ x 1)) (x <- (+ x 1))) and x is initially 0, then the final result should be 3, and x should be 2 in the resulting store.
This leads to the case
eval env (BinaryOperator op l r) = do
    lval <- eval env l
    rval <- eval env r
    return $ join $ liftM2 (applyBinop op) lval rval
Note that the do-notation is working within the State Store monad.
Furthermore, the use of return is monomorphic in State Store, while the uses of join and liftM2 are monomorphic in the Either String monad.
That is, here we use
(return . join) :: Either String (Either String Value) -> State Store (Either String Value)
and return . join is not a no-op.
(As is evident, applyBinop :: Identifier -> Value -> Value -> Either String Value.)
This seems confusing at best, and this is a relatively simple case.
The case of function application, for example, is considerably more complicated.
What useful best practices should I know about to keep my code readable—and writable?
EDIT: Here's a more typical example, which better showcases the ugliness.
The NewArrayC variant has parameters length :: Expression and element :: Expression (it creates an array of a given length with all elements initialized to a constant).
A simple example is (newArray 3 "foo"), which yields ["foo", "foo", "foo"], but we could also write (newArray (+ 1 2) (concat "fo" "oo")), because we can have arbitrary expressions in a NewArrayC.
But when we actually call
allocateMany :: Int -> Value -> State Store Address,
which takes the number of elements to allocate and the value for each slot, and returns the starting address, we need to unpack those values.
In the logic below, you can see that I'm duplicating a bunch of logic that should be built-in to the Either monad.
All the cases should just be binds.
eval env (NewArrayC len el) = do
    lenVal <- eval env len
    elVal  <- eval env el
    case lenVal of
        Right (NumV lenNum) -> case elVal of
            Right val -> do
                addr <- allocateMany lenNum val
                return $ Right $ ArrayV addr lenNum -- result data type
            left -> return left
        Right _ -> return $ Left "expected number in new-array length"
        left -> return left
This is what monad transformers are for. There is a StateT transformer to add state to a stack, and an EitherT transformer to add Either-like failure to a stack; however, I prefer ExceptT (which adds Except-like failure), so I will give my discussion in terms of that. Since you want the stateful bit outermost, you should use ExceptT e (State s) as your monad.
type DSL = ExceptT String (State Store)
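To get answers out at the end, unwrap the layers in order: runExceptT first, then runState. A self-contained sketch (Value trimmed to one constructor just for the sketch):

import Control.Monad.Except (ExceptT, runExceptT)
import Control.Monad.State (State, runState)
import qualified Data.Map as Map

type Address = Int
data Value = NumV Int deriving Show     -- stand-in for the question's Value
type Store = Map.Map Address Value
type DSL = ExceptT String (State Store)

-- run the ExceptT layer first, then the State layer
runDSL :: DSL a -> Store -> (Either String a, Store)
runDSL act = runState (runExceptT act)

-- ghci> runDSL (return (NumV 1)) Map.empty
-- (Right (NumV 1),fromList [])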
Note that the stateful operations can be spelled get and put, and these are polymorphic over all instances of MonadState; so that in particular they will work okay in our DSL monad. Similarly, the canonical way to raise an error is throwError, which is polymorphic over all instances of MonadError String; and in particular will work okay in our DSL monad.
So now we would write
eval :: Environment -> Expression -> DSL Value
eval _ (LiteralExpression v) = return v
eval env (BinaryOperator op l r) = liftEither =<< liftM2 (applyBinop op) (eval env l) (eval env r)
You might also consider giving eval a more polymorphic type; it could return an (MonadError String m, MonadState Store m) => m Value instead of a DSL Value. In fact, for allocateMany, it's important that you give it a polymorphic type:
allocateMany :: MonadState Store m => Int -> Value -> m Address
There are two pieces of interest about this type: first, because it is polymorphic over all MonadState Store m instances, you can be just as sure that it has only stateful side effects as if it had the type Int -> Value -> State Store Address that you suggested. However, also because it is polymorphic, it can be specialized to return a DSL Address, so it can be used in (for example) eval. Your example eval code becomes this:
eval env (NewArrayC len el) = do
    lenVal <- eval env len
    elVal  <- eval env el
    case lenVal of
        NumV lenNum -> do
            addr <- allocateMany lenNum elVal
            return $ ArrayV addr lenNum
        _ -> throwError "expected number in new-array length"
I think that's quite readable, really; nothing too extraneous there.
I have a function that has a return type of Maybe ([(Int,Int)],(Int,Int))
I would like to call this from another function and perform an operation on the data.
However, the return value is contained within Just. The second method takes ([(Int,Int)],(Int,Int)) and therefore will not accept Just ([(Int,Int)],(Int,Int)).
Is there a way I can trim the Just before applying the second method?
I don't fully understand the use of Just within Maybe - however, I have been told that the return type for the first Method must be Maybe.
There are several solutions to your problem, all based around pattern matching. I'm assuming you have two algorithms (since you didn't name them, I will):
algorithm1 :: a -> Maybe b
algorithm2 :: b -> c
input :: a
1) Pattern matching is typically done from either a case statement (below) or a function.
let val = algorithm1 input
in case val of
       Nothing -> defaultValue
       Just x  -> algorithm2 x
All other presented solutions use pattern matching, I'm just presenting standard functions that perform the pattern matching for you.
2) The prelude (and Data.Maybe) have some built-in functions to deal with Maybes. The maybe function is a great one, I suggest you use it. It's defined in standard libraries as:
maybe :: c -> (b -> c) -> Maybe b -> c
maybe n _ Nothing = n
maybe _ f (Just x) = f x
Your code would look like:
maybe defaultValue algorithm2 (algorithm1 input)
3) Since Maybe is a functor you could use fmap. This makes more sense if you don't have a default value. The definition:
instance Functor Maybe where
    fmap _ Nothing  = Nothing
    fmap f (Just a) = Just (f a)
So your code would look like:
fmap algorithm2 (algorithm1 input)
This output will be a Maybe value (Nothing if the result of algorithm1 is Nothing).
4) Finally, and strongly discouraged, is fromJust. Only use it if you are positive the first algorithm will return Just x (and not Nothing). Be careful! If you call fromJust val when val = Nothing then you get an exception, which is not appreciated in Haskell. Its definition:
fromJust :: Maybe b -> b
fromJust Nothing = error "Maybe.fromJust: Nothing" -- yuck
fromJust (Just x) = x
Leaving your code to look like:
algorithm2 (fromJust (algorithm1 input))
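Tying option 2 back to the question's concrete type, a small sketch (the algorithm bodies are invented for illustration):

algorithm1 :: Int -> Maybe ([(Int, Int)], (Int, Int))
algorithm1 0 = Nothing
algorithm1 n = Just ([(n, n)], (n, n))

algorithm2 :: ([(Int, Int)], (Int, Int)) -> Int
algorithm2 (pairs, _) = length pairs

-- maybe supplies the default when algorithm1 produces Nothing
result :: Int -> Int
result n = maybe 0 algorithm2 (algorithm1 n)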
You're looking for fromJust. But only if you're certain your Maybe function is not going to return a Nothing!
Since the following do block:
do
    x <- foo
    y <- bar
    return (x + y)
is desugared to the following form:
foo >>= (\x -> bar >>= (\y -> return (x + y)))
aren't \x -> ... and \y -> ... actually continuations here?
I was wondering if there is a way to capture the continuations in the definition of bind, but I can't get the types right. E.g.:
data Pause a = Pause a | Stop

instance Monad Pause where
    return x = Stop
    m >>= k = Pause k -- this doesn't work of course
Now I tried muddling around with the types:
data Pause a = Pause a (a -> Pause ???) | Stop
------- k ------
But this doesn't work either. Is there no way to capture these implicit continuations?
Btw, I know about the Cont monad, I'm just experimenting and trying out stuff.
OK, I'm not really sure, but let me offer a few thoughts. I'm not quite sure what it ought to mean to capture the continuation. You could, for example, capture the whole do block in a structure:
{-# LANGUAGE ExistentialQuantification #-}
import Control.Monad

data MonadExp b = Return b | forall a. Bind (MonadExp a) (a -> MonadExp b)

-- Functor and Applicative instances are required for Monad on modern GHC
instance Functor MonadExp where fmap = liftM
instance Applicative MonadExp where pure = Return; (<*>) = ap

instance Monad MonadExp where
    return = pure
    f >>= g = Bind f g
For example:
block :: MonadExp Int
block = do
    x <- return 1
    y <- return 2
    return $ x + y

instance Show (MonadExp a) where
    show (Return _) = "Return _"
    show (Bind _ _) = "Bind _ _"
print block
>> Bind _ _
And then evaluate the whole thing:
finish :: MonadExp a -> a
finish (Return x) = x
finish (Bind f g) = finish $ g (finish f)
print $ finish block
>> 3
Or step through it and see the parts
step :: MonadExp a -> MonadExp a
step (Return _) = error "At the end"
step (Bind f g) = g $ finish f
print $ step block
>> Bind _ _
print $ step $ step block
>> Return _
Well, now that I think about it more, that's probably not what you're asking. But
maybe it'll help you think.
Well, I don't know if your lambdas are continuations in the strictest sense of the term, but to my eye they look similar to that concept.
But note that if they are continuations, then the desugared monadic code is already written in continuation passing style (CPS). The usual notion of a control operator that "captures" a continuation is based on direct-style programs; the "captured" continuation is only implicit in the direct-style program, but the CPS transformation makes it explicit.
Since the desugared monadic code is already in CPS or something like it, well, maybe a way to frame your question is whether monadic code can express some of the control flow tricks that CPS code can. Typically, those tricks boil down to the idea that while under the CPS regime it is conventional for a function to finish by invoking its continuation, a function can choose to replace its continuation with another of its choosing. This replacement continuation can be constructed with a reference to the original continuation, so that it can in turn "restore" that original one if it chooses. So for example, coroutines are implemented as a mutual "replace/restore" cycle.
And looked at in this light, I think your answer is mostly no; CPS requires that in foo >>= bar, foo must be able to choose whether bar will be invoked at all, and foo must be able to supply a substitute for bar, but (>>=) by itself does not offer a mechanism for foo to do this, and more importantly, (>>=) is in control of the execution flow, not foo. Some specific monads implement parts or all of it (for example, the Maybe monad allows foo to forego execution of bar by producing a Nothing result), but others don't.
Closest I could get is to forego (>>=) and use this instead:
-- | Execute action #foo# with its "continuation" #bar#.
callCC :: Monad m => ((a -> m b) -> m b) -> (a -> m b) -> m b
foo `callCC` bar = foo bar
Here foo can choose whether bar will be used at all. But notice that this callCC is really just ($)!
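A tiny demonstration in the Maybe monad (my example): foo gets to decide whether its "continuation" bar runs at all.

demo1, demo2 :: Maybe Int
demo1 = (\k -> k 1)     `callCC` (\x -> Just (x + 1))   -- Just 2: foo invokes bar
demo2 = (\_ -> Nothing) `callCC` (\x -> Just (x + 1))   -- Nothing: bar never runs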
Here's a simple, barebones example of how the code that I'm trying to do would look in C++.
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
In the program I'm working on, I have some functions that I need to loop through until bool state equals false (or until one of the variables, let's say variable b, equals 0).
How would this code be done in Haskell? I've searched through here, Google, and even Bing, and haven't been able to find any clear, straightforward explanation of how to do repetitive actions with functions.
Any help would be appreciated.
Taking Daniel's comment into account, it could look something like this:
f = loop init_a init_b True
  where
    loop a b True = loop a' b' (fun3 a' b')
      where
        a' = fun1 ....
        b' = fun2 .....
    loop a b False = (a, b)
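A concrete, runnable instance of that skeleton, with made-up stand-ins for fun1/fun2/fun3 (count a up to 10, doubling b along the way):

f :: (Int, Int)
f = loop 0 1 True
  where
    loop a b True  = loop a' b' (a' < 10)   -- the test plays the role of fun3
      where
        a' = a + 1   -- stands in for fun1
        b' = b * 2   -- stands in for fun2
    loop a b False = (a, b)

-- f == (10, 1024)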
Well, here's a suggestion of how to map the concepts here:
A C++ loop is some form of list operation in Haskell.
One iteration of the loop = handling one element of the list.
Looping until a certain condition becomes true = base case of a function that recurses on a list.
But there is something that is critically different between imperative loops and functional list functions: loops describe how to iterate; higher-order list functions describe the structure of the computation. So for example, map f [a0, a1, ..., an] can be described by this diagram:
[  a0,   a1, ...,   an]
    |     |          |
    f     f          f
    |     |          |
    v     v          v
[f a0, f a1, ..., f an]
Note that this describes how the result is related to the arguments f and [a0, a1, ..., an], not how the iteration is performed step by step.
Likewise, foldr f z [a0, a1, ..., an] corresponds to this:
f a0 (f a1 (... (f an z)))
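Instantiating that picture with f = (+) and z = 0:

*Main> foldr (+) 0 [1, 2, 3]   -- 1 + (2 + (3 + 0))
6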
filter doesn't quite lend itself to diagramming, but it's easy to state many rules that it satisfies (a quick check follows the list):
length (filter pred xs) <= length xs
For every element x of filter pred xs, pred x is True.
If x is an element of filter pred xs, then x is an element of xs
If x is not an element of xs, then x is not an element of filter pred xs
If x appears before x' in filter pred xs, then x appears before x' in xs
If x appears before x' in xs, and both x and x' appear in filter pred xs, then x appears before x' in filter pred xs
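A quick check of those rules on a concrete call:

*Main> filter even [1..10]
[2,4,6,8,10]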
In a classic imperative program, all three of these cases are written as loops, and the difference between them comes down to what the loop body does. Functional programming, on the contrary, insists that this sort of structural pattern does not belong in "loop bodies" (the functions f and pred in these examples); rather, these patterns are best abstracted out into higher-order functions like map, foldr and filter. Thus, every time you see one of these list functions you instantly know some important facts about how the arguments and the result are related, without having to read any code; whereas in a typical imperative program, you must read the bodies of loops to figure this stuff out.
So the real answer to your question is that it's impossible to offer an idiomatic translation of an imperative loop into functional terms without knowing what the loop body is doing—what are the preconditions supposed to be before the loop runs, and what the postconditions are supposed to be when the loop finishes. Because that loop body that you only described vaguely is going to determine what the structure of the computation is, and different such structures will call for different higher-order functions in Haskell.
First of all, let's think about a few things.
Does function1 have side effects?
Does function2 have side effects?
Does function3 have side effects?
The answer to all of these is a resoundingly obvious YES, because they take no inputs, and presumably there are circumstances which cause you to go around the while loop more than once (rather than def function3(): return false). Now let's remodel these functions with explicit state.
s = initialState
sentinel = true
while (sentinel):
    a,b,s,sentinel = function1(a,b,s,sentinel)
    a,b,s,sentinel = function2(a,b,s,sentinel)
    a,b,s,sentinel = function3(a,b,s,sentinel)
return a,b,s
Well that's rather ugly. We know absolutely nothing about what inputs each function draws from, nor do we know anything about how these functions might affect the variables a, b, and sentinel, nor "any other state" which I have simply modeled as s.
So let's make a few assumptions. Firstly, I am going to assume that these functions do not directly depend on nor affect in any way the values of a, b, and sentinel. They might, however, change the "other state". So here's what we get:
s = initState
sentinel = true
while (sentinel):
    a,s2 = function1(s)
    b,s3 = function2(s2)
    sentinel,s4 = function3(s3)
    s = s4
return a,b,s
Notice I've used temporary variables s2, s3, and s4 to indicate the changes that the "other state" goes through. Haskell time. We need a control function to behave like a while loop.
myWhile :: s                    -- an initial state
        -> (s -> (Bool, a, s))  -- given a state, produces a sentinel, a current result, and the next state
        -> (a, s)               -- the result, plus resultant state
myWhile s f = case f s of
    (False, a, s') -> (a, s')
    (True, _, s')  -> myWhile s' f
Now how would one use such a function? Well, given we have the functions:
function1 :: MyState -> (AType, MyState)
function2 :: MyState -> (BType, MyState)
function3 :: MyState -> (Bool, MyState)
We would construct the desired code as follows:
thatCodeBlockWeAreTryingToSimulate :: MyState -> ((AType, BType), MyState)
thatCodeBlockWeAreTryingToSimulate initState = myWhile initState f
  where
    f :: MyState -> (Bool, (AType, BType), MyState)
    f s = let (a, s2)        = function1 s
              (b, s3)        = function2 s2
              (sentinel, s4) = function3 s3
          in (sentinel, (a, b), s4)
Notice how similar this is to the non-ugly python-like code given above.
You can verify that the code I have presented is well-typed by adding function1 = undefined etc for the three functions, as well as the following at the top of the file:
{-# LANGUAGE EmptyDataDecls #-}
data MyState
data AType
data BType
So the takeaway message is this: in Haskell, you must explicitly model the changes in state. You can use the "State Monad" to make things a little prettier, but you should first understand the idea of passing state around.
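As a taste of that, here is the same loop written in the State monad (a sketch reusing the stubs above; state from Control.Monad.State wraps a function of type MyState -> (a, MyState) as a monadic action):

import Control.Monad.State

loopS :: State MyState (AType, BType)
loopS = do
    a        <- state function1
    b        <- state function2
    sentinel <- state function3
    if sentinel then loopS else return (a, b)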
Let's take a look at your C++ loop:
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
Haskell is a pure functional language, so it won't fight us as much (and the resulting code will be more useful, both in itself and as an exercise to learn Haskell) if we try to do this without side effects, and without using monads to make it look like we're using side effects either.
Let's start with this structure:
while (state == true) {
    <<do stuff that updates state>>
}
In Haskell we're obviously not going to be checking a variable against true as the loop condition, because it can't change its value[1] and we'd either evaluate the loop body forever or never. So instead, we'll want to be evaluating a function that returns a boolean value on some argument:
while (check something == True) {
    <<do stuff that updates state>>
}
Well, now we don't have a state variable, so that "do stuff that updates state" is looking pretty pointless. And we don't have a something to pass to check. Let's think about this a bit more. We want the something to be checked to depend on what the "do stuff" bit is doing. We don't have side effects, so that means something has to be (or be derived from) a value returned from the "do stuff". The "do stuff" also needs to take something that varies as an argument, or it'll just keep returning the same thing forever, which is also pointless. We also need to return a value out of all this, otherwise we're just burning CPU cycles (again, with no side effects there's no point running a function if we don't use its output in some way, and there's even less point running a function repeatedly if we never use its output).
So how about something like this:
while check func state =
    let next_state = func state in
        if check next_state
            then while check func next_state
            else next_state
Let's try it in GHCi:
*Main> while (<20) (+1) 0
20
This is the result of applying (+1) repeatedly while the result is less than 20, starting from 0.
*Main> while ((<20) . length) (++ "zob") ""
"zobzobzobzobzobzobzob"
This is the result of concatenating "zob" repeatedly while the result's length is less than 20, starting from the empty string.
So you can see I've defined a function that is (sort of a bit) analogous to a while loop from imperative languages. We didn't even need dedicated loop syntax for it! (which is the real reason Haskell has no such syntax; if you need this kind of thing you can express it as a function). It's not the only way to do so, and experienced Haskell programmers would probably use other standard library functions to do this kind of job, rather than writing while.
But I think it's useful to see how you can express this kind of thing in Haskell. It does show that you can't translate things like imperative loops directly into Haskell; I didn't end up translating your loop in terms of my while because it ends up pretty pointless; you never use the result of function1 or function2, they're called with no arguments so they'd always return the same thing in every iteration, and function3 likewise always returns the same thing, and can only return true or false to either cause while to keep looping or stop, with no information resulting.
Presumably in the C++ program they're all using side effects to actually get some work done. If they operate on in-memory things then you need to translate a bigger chunk of your program at once to Haskell for the translation of this loop to make any sense. If those functions are doing IO then you'll need to do this in the IO monad in Haskell, for which my while function doesn't work, but you can do something similar.
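For the IO case, a minimal analogue of the pure while above might look like this (my sketch, not from the original answer):

whileIO :: (a -> Bool) -> (a -> IO a) -> a -> IO a
whileIO check body st = do
    st' <- body st
    if check st'
        then whileIO check body st'
        else return st'

-- e.g. whileIO (< 3) (\n -> print n >> return (n + 1)) 0
-- prints 0, 1, 2 and returns 3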
[1] As an aside, it's worth trying to understand that "you can't change variables" in Haskell isn't just an arbitrary restriction, nor is it just an acceptable trade off for the benefits of purity, it is a concept that doesn't make sense the way Haskell wants you to think about Haskell code. You're writing down expressions that result from evaluating functions on certain arguments: in f x = x + 1 you're saying that f x is x + 1. If you really think of it that way rather than thinking "f takes x, then adds one to it, then returns the result" then the concept of "having side effects" doesn't even apply; how could something existing and being equal to something else somehow change a variable, or have some other side effect?
You should write a solution to your problem in a more functional style.
That said, some Haskell constructs work a lot like imperative looping; take for example state monads, tail recursion, until, foldr, etc.
A simple example is the factorial. In C you would write a loop, whereas in Haskell you can, for example, write fact n = foldr (*) 1 [2..n].
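For instance, both of these compute the factorial; the second uses until from the Prelude, about the closest the standard library comes to a while loop (my sketch):

fact :: Integer -> Integer
fact n = foldr (*) 1 [2..n]

factLoop :: Integer -> Integer
factLoop n = snd (until (\(i, _) -> i > n) (\(i, acc) -> (i + 1, acc * i)) (2, 1))
-- fact 5 == factLoop 5 == 120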
If you have two functions f :: a -> b and g :: b -> c, where a, b, and c are types like String or [Int], then you can compose them simply by writing g . f.
If you wish to apply them across a list or vector, you could write map (g . f) or V.map (g . f), assuming you've done import qualified Data.Vector as V.
Example: I wish to print a list of markdown headings like ## <number>. <heading> ##, but I need Roman numerals numbered from 1, and my list of headings has type [(String,Double)], where the Double is irrelevant.
import Data.List
import Text.Numeral.Roman

let fun = zipWith (\a b -> "## " ++ a ++ ". " ++ b ++ " ##\n") (map toRoman [1..]) . map fst
fun [("Foo",3.5),("Bar",7.1)]
What the hell does this do?
toRoman turns a number into a string containing the Roman numeral. map toRoman does this to every element of a list. map toRoman [1..] does it to every element of the lazy infinite list [1,2,3,4,..], yielding a lazy infinite list of Roman-numeral strings.
fst :: (a,b) -> a simply extracts the first element of a tuple. map fst throws away our silly Double information along the entire list.
\a b -> "## " ++ a ++ ". " ++ b ++ " ##\n" is a lambda expression that takes two strings and concatenates them together within the desired formatting strings.
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] takes a two-argument function like our lambda expression and feeds it pairs of elements from its own second and third arguments.
You'll observe that zip, zipWith, etc. only read as much of the lazy infinite list of Roman numerals as needed for the list of headings, meaning I've numbered my headings without maintaining any counter variable.
Finally, I have declared fun without naming its argument because the compiler can figure it out from the fact that map fst requires one argument. You'll notice that I put a . before my second map too. I could've written (map fst h) or $ map fst h instead if I'd written fun h = ..., but leaving the argument off fun meant I needed to compose map fst with zipWith after applying zipWith to two of the three arguments it wants.
I'd hope the compiler combines the zipWith and maps into one single loop via inlining.
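If you'd rather not depend on Text.Numeral.Roman, the same shape works with plain numbers (a sketch):

fun :: [(String, Double)] -> [String]
fun = zipWith (\n h -> "## " ++ show n ++ ". " ++ h ++ " ##") [(1 :: Int) ..] . map fst

-- fun [("Foo", 3.5), ("Bar", 7.1)] == ["## 1. Foo ##","## 2. Bar ##"]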