Would you ever write seq x x?

I'm not entirely clear on how seq works in Haskell.
It seems like there are lots of cases where it would be useful to write
seq x x
and maybe even define a function:
strict x = seq x x
but such a function doesn't already exist so I'm guessing this approach is somehow wrongheaded. Could someone tell me if this is meaningful or useful?

seq a b returns the value of b, but makes that value depend on the evaluation of a. Thus, seq a a is exactly the same thing as a.
I think the misunderstanding here is that seq doesn't take any action, because pure functions don't take actions; it just introduces a dependency.
There is a function evaluate :: a -> IO a in Control.Exception that does what you want (note that it runs in IO). It lives in Control.Exception because it's useful for checking whether the evaluation of an expression would throw and, if so, handling the exception.
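For instance, here is a minimal sketch of using evaluate together with try (both from Control.Exception) to force a value at a specific point in IO and catch the exception it throws; the name x and the error message are just illustrative:

import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  let x = error "boom" :: Int
  -- evaluate forces x to WHNF here, inside IO, so the error
  -- surfaces at a well-defined point and can be caught:
  r <- try (evaluate x) :: IO (Either SomeException Int)
  putStrLn (either (const "evaluation threw") show r)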

The expression x = seq a b means that if x is evaluated, then a will also be evaluated (but x will be equal to b).
It does not mean "evaluate a now".
Notice that if x is evaluated, then since x equals b, b will also be evaluated.
Hence, if I write x = seq a a, I am saying "if x is evaluated, then evaluate a". But plain x = a achieves exactly the same thing.

When you say seq a b what you are telling the computer is,
Whenever you need to evaluate b, evaluate a for me too, please.
If we replace both a and b with x you can see why it's useless to write seq x x:
Whenever you need to evaluate x, evaluate x for me too, please.
Asking the computer to evaluate x when it needs to evaluate x is just a useless thing to do – it was going to evaluate x anyway!
seq does not evaluate anything – it simply tells the computer that when you need the second argument, also evaluate the first argument. Understanding this is actually really important, because it allows you to understand the behaviour of your programs much better.
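A quick GHCi session makes the dependency visible: binding the expression evaluates nothing, and the first argument is only forced once the whole thing is demanded.

Prelude> let x = seq (error "forced!") (42 :: Int)  -- defines x, evaluates nothing
Prelude> x   -- demanding x forces the first argument of seq first
*** Exception: forced!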

seq x x would be entirely, trivially redundant.
Remember, seq is not a command. The presence of seq a b in your program does not force evaluation of a or b. What it does do is make the evaluation of the result artificially dependent on the evaluation of a, even though the result itself is b. If you print out seq a b, a will be evaluated and its result discarded. Since x already depends on itself, seq x x is silly.

Close! deepseq (which is the "more thorough" seq -- see the docs for a full description) has the type NFData a => a -> b -> b, and force (with type NFData a => a -> a) is defined simply as
force :: (NFData a) => a -> a
force x = x `deepseq` x
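To see how deepseq is "more thorough", compare it with seq in GHCi: seq stops at the outermost constructor, while deepseq forces the whole structure.

Prelude> import Control.DeepSeq
Prelude Control.DeepSeq> seq [undefined] 4
4
Prelude Control.DeepSeq> deepseq [undefined] 4
*** Exception: Prelude.undefined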

Related

Can pseq be defined in terms of seq?

As far as I know, seq a b evaluates (forces) a and b before returning b. It does not guarantee that a is evaluated first.
pseq a b evaluates a first, then evaluates/returns b.
Now consider the following:
xseq a b = (seq a id) b
Function application needs to evaluate the left operand first (to get a lambda form), and it can't blindly evaluate the right operand before entering the function because that would violate Haskell's non-strict semantics.
Therefore (seq a id) b must evaluate seq a id first, which forces a and id (in some unspecified order (but evaluating id does nothing)), then returns id b (which is b); therefore xseq a b evaluates a before b.
Is xseq a valid implementation of pseq? If not, what's wrong with the above argument (and is it possible to define pseq in terms of seq at all)?
The answer seems to be "no, at least not without additional magic".
The problem with
xseq a b = (seq a id) b
is that the compiler can see that the result of seq a id is id, which is strict everywhere. Function application is allowed to evaluate the argument first if the function is strict, because then doing so does not change the semantics of the expression. Therefore an optimizing compiler could start evaluating b first because it knows it will eventually need it.
Can pseq be defined in terms of seq?
In GHC - yes.
As noted by Alec, you'll also need the mirror-smoke lazy:
-- for GHC 8.6.5
import Prelude(seq)
import GHC.Base(lazy)
infixr 0 `pseq`
pseq :: a -> b -> b
pseq x y = x `seq` lazy y
The definition matches its counterpart in the GHC sources, though the imports are very different.
For other Haskell implementations, this may work:
import Prelude(seq)
infixr 0 `pseq`
pseq :: a -> b -> b
pseq x y = x `seq` (case x of _ -> y)
possibly in conjunction with - at the very least - the equivalent of:
-- for GHC 8.6.5
{-# NOINLINE pseq #-}
I'll let melpomene decide if that also qualifies as mirror-smoke...

Memoizing multiplication

My application multiplies vectors after a (costly) conversion using an FFT. As a result, when I write
f :: (Num a) => a -> [a] -> [a]
f c xs = map (c*) xs
I only want to compute the FFT of c once, rather than for every element of xs. There really isn't any need to store the FFT of c for the entire program, just in the local scope.
I attempted to define my Num instance like:
data Foo = Scalar c
         | Vec Bool v -- the bool indicates which domain v is in

instance Num Foo where
  (*) (Scalar c) = \x -> case x of
    Scalar d -> Scalar (c*d)
    Vec b v  -> Vec b $ map (c*) v
  (*) v1 = let Vec True v = fft v1
           in \x -> case x of
                Scalar d -> Vec True $ map (c*) v
                v2       -> Vec True $ zipWith (*) v (fft v2)
Then, in an application, I call a function similar to f (which works on arbitrary Nums) where c = Vec False v, and I expected that this would be just as fast as if I hacked f to:
g :: Foo -> [Foo] -> [Foo]
g c xs = let c' = fft c
         in map (c'*) xs
The function g makes the memoization of fft c occur, and is much faster than calling f (no matter how I define (*)). I don't understand what is going wrong with f. Is it my definition of (*) in the Num instance? Does it have something to do with f working over all Nums, and GHC therefore being unable to figure out how to partially compute (*)?
Note: I checked the core output for my Num instance, and (*) is indeed represented as nested lambdas with the FFT conversion in the top level lambda. So it looks like this is at least capable of being memoized. I have also tried both judicious and reckless use of bang patterns to attempt to force evaluation to no effect.
As a side note, even if I can figure out how to make (*) memoize its first argument, there is still another problem with how it is defined: A programmer wanting to use the Foo data type has to know about this memoization capability. If she wrote
map (*c) xs
no memoization would occur. (It must be written as map (c*) xs.) Now that I think about it, I'm not entirely sure how GHC would rewrite the (*c) version, since I have curried (*). But I did a quick test to verify that both (*c) and (c*) work as expected: (c*) makes c the first argument to *, while (*c) makes c the second. So the problem is that it is not obvious how one should write the multiplication to ensure memoization. Is this just an inherent downside of infix notation (and the implicit assumption that the arguments to * are symmetric)?
The second, less pressing issue is the case where we map (v*) onto a list of scalars. In this case, (hopefully) the fft of v would be computed and stored, even though that is unnecessary since the other multiplicand is a scalar. Is there any way around this?
Thanks
I believe the stable-memo package could solve your problem. It memoizes values not by equality but by reference identity:
Whereas most memo combinators memoize based on equality, stable-memo does it based on whether the exact same argument has been passed to the function before (that is, is the same argument in memory).
And it automatically drops memoized values when their keys are garbage collected:
stable-memo doesn't retain the keys it has seen so far, which allows them to be garbage collected if they will no longer be used. Finalizers are put in place to remove the corresponding entries from the memo table if this happens.
So if you define something like
fft = memo fft'
  where fft' = ... -- your old definition
you'll get pretty much what you need: Calling map (c *) xs will memoize the computation of fft inside the first call to (*) and it gets reused on subsequent calls to (c *). And if c is garbage collected, so is fft' c.
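For illustration, here is a minimal, self-contained sketch of that approach. It assumes stable-memo's Data.StableMemo module (memo :: (a -> b) -> a -> b), and expensiveFFT is a hypothetical stand-in for the real conversion:

import Data.StableMemo (memo)
import Debug.Trace (trace)

-- Hypothetical stand-in for the expensive FFT conversion:
expensiveFFT :: [Double] -> [Double]
expensiveFFT = trace "computing fft" . map (* 2)

fft :: [Double] -> [Double]
fft = memo expensiveFFT

main :: IO ()
main = do
  let c = [1, 2, 3]
  print (fft c)  -- traces "computing fft"
  print (fft c)  -- same list object in memory: memoized result is reused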
See also this answer to How to add fields that only cache something to ADT?
I can see two problems that might prevent memoization:
First, f has an overloaded type and works for all Num instances. So f cannot use memoization unless it is either specialized (which usually requires a SPECIALIZE pragma) or inlined (which may happen automatically, but is more reliable with an INLINE pragma).
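For illustration, a specialization hint for f might look like this (a sketch, assuming f and the concrete Foo type are visible in the same module):

f :: Num a => a -> [a] -> [a]
f c xs = map (c *) xs
{-# SPECIALIZE f :: Foo -> [Foo] -> [Foo] #-}
-- or, instead, ask GHC to inline f at its call sites:
-- {-# INLINE f #-}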
Second, the definition of (*) for Foo performs pattern matching on the first argument, but f multiplies with an unknown c. So within f, even if specialized, no memoization can occur. Once again, it very much depends on f being inlined and on a concrete argument being supplied for c, so that inlining can actually happen.
So I think it'd help to see how exactly you're calling f. Note that if f is defined using two arguments, it has to be given two arguments, otherwise it cannot be inlined. It would furthermore help to see the actual definition of Foo, as the one you are giving mentions c and v which aren't in scope.

the seq function and strictness

I have been wondering about this a lot, but I haven't been able to find anything about it.
When using the seq function, how does it then really work? Everywhere, it is just explained saying that seq a b evaluates a, discards the result and returns b.
But what does that really mean? Would the following result in strict evaluation:
foo s t = seq q (bar q t)
  where q = s*t
What I mean is, is q strictly evaluated before being used in bar? And would the following be equivalent:
foo s t = seq (s*t) (bar (s*t) t)
I find it a little hard getting specifics on the functionality of this function.
You're not alone. seq is probably one of the most difficult Haskell functions to use properly, for a few different reasons. In your first example:
foo s t = seq q (bar q t)
  where q = s*t
q is evaluated before bar q t is evaluated. If bar q t is never evaluated, q won't be either. So if you have
main = do
  let val = foo 10 20
  return ()
as val is never used, it won't be evaluated. So q won't be evaluated either. If you instead have
main = print (foo 10 20)
the result of foo 10 20 is evaluated (by print), so within foo q is evaluated before the result of bar.
This is also why this doesn't work:
myseq x = seq x x
Semantically, this means the first x will be evaluated before the second x is evaluated. But if the second x is never evaluated, the first one doesn't need to be either. So seq x x is exactly equivalent to x.
Your second example may or may not be the same thing. Here, the expression s*t will be evaluated before bar's output, but it may not be the same s*t as the first parameter to bar. If the compiler performs common sub-expression elimination, it may common-up the two identical expressions. GHC can be fairly conservative about where it does CSE though, so you can't rely on this. If I define bar q t = q*t it does perform the CSE and evaluate s*t before using that value in bar. It may not do so for more complex expressions.
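If you need the sharing, it's safer not to rely on CSE at all: bind the expression to a name, as in the first version, so both uses refer to the same thunk.

foo s t = let q = s * t       -- one thunk, shared by both uses
          in q `seq` bar q t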
You might also want to know what is meant by strict evaluation. seq evaluates the first argument to weak head normal form (WHNF), which for data types means unpacking the outermost constructor. Consider this:
baz xs y = seq xs (map (*y) xs)
xs must be a list, because of map. When seq evaluates it, it will essentially transform the code into
case xs of
  []    -> map (*y) xs
  (_:_) -> map (*y) xs
This means it will determine if the list is empty or not, then return the second argument. Note that none of the list values are evaluated. So you can do this:
Prelude> seq [undefined] 4
4
but not this
Prelude> seq undefined 5
*** Exception: Prelude.undefined
Whatever data type you use for seq's first argument, evaluation to WHNF goes just far enough to determine the outermost constructor and no further, unless the data type has fields marked strict (with !); then all the strict fields will also be evaluated to WHNF.
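For example (a small sketch with two otherwise identical types):

data Lazy   = Lazy Int     -- lazy field
data Strict = Strict !Int  -- strict field

-- seq (Lazy undefined) 'a'    evaluates to 'a'
-- seq (Strict undefined) 'a'  throws: reaching WHNF forces the strict field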
Edit: (thanks to Daniel Wagner for suggestion in comments)
For functions, seq will evaluate the expression until the function "has a lambda showing", which means that it's ready for application. Here are some examples that might demonstrate what this means:
-- ok, lambda is outermost
Prelude> seq (\x -> undefined) 'a'
'a'
-- not ok. Because of the inner seq, `undefined` must be evaluated before
-- the lambda is showing
Prelude> seq (seq undefined (\x -> x)) 'b'
*** Exception: Prelude.undefined
If you think of a lambda binding as a (built-in) data constructor, seq on functions is perfectly consistent with using it on data.
Also, "lambda binding" subsumes all types of function definitions, whether defined by lambda notation or as a normal function.
The Controversy section of the HaskellWiki's seq page has a little about some of the consequences of seq in relation to functions.
You can think of seq as:
seq a b = case a of
            _ -> b
This will evaluate a to weak head normal form (WHNF) and then continue with evaluating b.
Edit after augustss' comment: this case ... of is the strict, GHC Core one, which always forces its argument.

Evaluation strategy

How should one reason about function evaluation in examples like the following in Haskell:
let f x = ...
    x = ...
in map (g (f x)) xs
In GHC, sometimes (f x) is evaluated only once, and sometimes once for each element in xs, depending on what exactly f and g are. This can be important when f x is an expensive computation. It has just tripped a Haskell beginner I was helping and I didn't know what to tell him other than that it is up to the compiler. Is there a better story?
Update
In the following example (f x) will be evaluated 4 times:
let f x = trace "!" $ zip x x
    x = "abc"
in map (\i -> lookup i (f x)) "abcd"
With language extensions, we can create situations where f x must be evaluated repeatedly:
{-# LANGUAGE GADTs, Rank2Types #-}
module MultiEvG where

data BI where
  B :: (Bounded b, Integral b) => b -> BI

foo :: [BI] -> [Integer]
foo xs = let f :: (Integral c, Bounded c) => c -> c
             f x = maxBound - x
             g :: (forall a. (Integral a, Bounded a) => a) -> BI -> Integer
             g m (B y) = toInteger (m + y)
             x :: (Integral i) => i
             x = 3
         in map (g (f x)) xs
The crux is to have f x polymorphic even as the argument of g, and we must create a situation where the type(s) at which it is needed can't be predicted (my first stab used an Either a b instead of BI, but when optimising, that of course led to only two evaluations of f x at most).
A polymorphic expression must be evaluated at least once for each type it is used at. That's one reason for the monomorphism restriction. However, when the range of types it can be needed at is restricted, it is possible to memoise the values at each type, and in some circumstances GHC does that (needs optimising, and I expect the number of types involved mustn't be too large). Here we confront it with what is basically an inhomogeneous list, so in each invocation of g (f x), it can be needed at an arbitrary type satisfying the constraints, so the computation cannot be lifted outside the map (technically, the compiler could still build a cache of the values at each used type, so it would be evaluated only once per type, but GHC doesn't, in all likelihood it wouldn't be worth the trouble).
Monomorphic expressions need only be evaluated once, they can be shared. Whether they are is up to the implementation; by purity, it doesn't change the semantics of the programme. If the expression is bound to a name, in practice you can rely on it being shared, since it's easy and obviously what the programmer wants. If it isn't bound to a name, it's a question of optimisation. With the bytecode generator or without optimisations, the expression will often be evaluated repeatedly, but with optimisations repeated evaluation would indicate a compiler bug.
Polymorphic expressions must be evaluated at least once for every type they're used at, but with optimisations, when GHC can see that it may be used multiple times at the same type, it will (usually) still be shared for that type during a larger computation.
Bottom line: Always compile with optimisations, help the compiler by binding expressions you want shared to a name, and give monomorphic type signatures where possible.
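Applied in miniature, that advice looks like this; f, g and the Integer type below are hypothetical stand-ins, chosen only to show the shape:

f :: Integer -> Integer
f = (* 2)

g :: Integer -> Integer -> Integer
g = (+)

shared :: [Integer]
shared = let fx :: Integer      -- monomorphic signature: one shareable value
             fx = f 10          -- bound to a name: evaluated at most once
         in map (g fx) [1, 2, 3]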
Your examples are indeed quite different.
In the first example, the argument to map is g (f x), and it is passed to map once, most likely as a partially applied function.
Should g (f x), when applied to an argument within map, evaluate its first argument, this will happen only once, and the thunk (f x) will be updated with the result.
Hence, in your first example, f x will be evaluated at most once.
Your second example requires a deeper analysis before the compiler can arrive at the conclusion that (f x) is always constant in the lambda expression. Perhaps it will never optimize it at all, because it may have knowledge that trace is not quite kosher. So, this may evaluate 4 times when tracing, and 4 times or 1 time when not tracing.
This is really dependent on GHC's optimizations, as you've been able to tell.
The best thing to do is to study the GHC core that you get after optimizing the program. I would look at the generated Core and examine whether f x had its own let statement outside the map or not.
If you want to be sure, then you should factor f x out into its own variable assigned in a let, but there's not really a guaranteed way to figure it out other than reading through Core.
All that said, with the exception of things like trace that use unsafePerformIO, this will never change the semantics of your program: how it actually behaves.
In GHC without optimizations, the body of a function is evaluated every time the function is called. (A "call" means the function is applied to arguments and the result is evaluated.) In the following example, f x is inside a function, so it will execute each time the function is called.
(GHC may optimize this expression as discussed in the FAQ [1].)
let f x = trace "!" $ zip x x
    x = "abc"
in map (\i -> lookup i (f x)) "abcd"
However, if we move f x out of the function, it will execute only once.
let f x = trace "!" $ zip x x
    x = "abc"
in map ((\f_x i -> lookup i f_x) (f x)) "abcd"
This can be rewritten more readably as
let f x = trace "!" $ zip x x
    x = "abc"
    g f_x i = lookup i f_x
in map (g (f x)) "abcd"
The general rule is that, each time a function is applied to an argument, a new "copy" of the function body is created. Function application is the only thing that may cause an expression to re-execute. However, be warned that some functions and function calls do not look like functions syntactically.
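One common example of a call that doesn't look like one is a binding with a class constraint: under the hood it takes a dictionary argument, so each use at a new type re-runs the body, which is exactly what happens in the BI example above.

x :: Integral i => i   -- looks like a constant...
x = 3                  -- ...but is really a function of an Integral dictionary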
[1] http://www.haskell.org/haskellwiki/GHC/FAQ#Subexpression_Elimination

Loop through a set of functions with Haskell

Here's a simple, barebones example of how the code that I'm trying to do would look in C++.
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
In the program I'm working on, I have some functions that I need to loop through until bool state equals false (or until one of the variables, let's say variable b, equals 0).
How would this code be done in Haskell? I've searched through here, Google, and even Bing, and haven't been able to find any clear, straightforward explanations of how to do repetitive actions with functions.
Any help would be appreciated.
Taking Daniel's comment into account, it could look something like this:
f = loop init_a init_b True
  where
    loop a b True = loop a' b' (fun3 a' b')
      where
        a' = fun1 ....
        b' = fun2 .....
    loop a b False = (a, b)
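A self-contained version of the same pattern, with hypothetical stand-ins for fun1, fun2 and fun3 so it actually runs and terminates:

f :: (Int, Int)
f = loop 0 0 True
  where
    loop a b True  = loop a' b' (fun3 a' b')
      where
        a' = fun1 a
        b' = fun2 b
    loop a b False = (a, b)
    fun1 = (+ 1)             -- stand-in for function1
    fun2 = (+ 2)             -- stand-in for function2
    fun3 x y = x + y < 10    -- stand-in for the loop condition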
Well, here's a suggestion of how to map the concepts here:
A C++ loop is some form of list operation in Haskell.
One iteration of the loop = handling one element of the list.
Looping until a certain condition becomes true = base case of a function that recurses on a list.
But there is something that is critically different between imperative loops and functional list functions: loops describe how to iterate; higher-order list functions describe the structure of the computation. So for example, map f [a0, a1, ..., an] can be described by this diagram:
[a0, a1, ..., an]
 |   |        |
 f   f        f
 |   |        |
 v   v        v
[f a0, f a1, ..., f an]
Note that this describes how the result is related to the arguments f and [a0, a1, ..., an], not how the iteration is performed step by step.
Likewise, foldr f z [a0, a1, ..., an] corresponds to this:
f a0 (f a1 (... (f an z)))
filter doesn't quite lend itself to diagramming, but it's easy to state many rules that it satisfies:
length (filter pred xs) <= length xs
For every element x of filter pred xs, pred x is True.
If x is an element of filter pred xs, then x is an element of xs
If x is not an element of xs, then x is not an element of filter pred xs
If x appears before x' in filter pred xs, then x appears before x' in xs
If x appears before x' in xs, and both x and x' appear in filter pred xs, then x appears before x' in filter pred xs
In a classic imperative program, all three of these cases are written as loops, and the difference between them comes down to what the loop body does. Functional programming, on the contrary, insists that this sort of structural pattern does not belong in "loop bodies" (the functions f and pred in these examples); rather, these patterns are best abstracted out into higher-order functions like map, foldr and filter. Thus, every time you see one of these list functions you instantly know some important facts about how the arguments and the result are related, without having to read any code; whereas in a typical imperative program, you must read the bodies of loops to figure this stuff out.
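To make that concrete, each of the three shapes is a single combinator application:

map (* 2) [1, 2, 3]       -- [2, 4, 6]
foldr (+) 0 [1, 2, 3]     -- 6
filter even [1, 2, 3, 4]  -- [2, 4]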
So the real answer to your question is that it's impossible to offer an idiomatic translation of an imperative loop into functional terms without knowing what the loop body is doing: what the preconditions are supposed to be before the loop runs, and what the postconditions are supposed to be when it finishes. The loop body you only described vaguely is going to determine the structure of the computation, and different structures call for different higher-order functions in Haskell.
First of all, let's think about a few things.
Does function1 have side effects?
Does function2 have side effects?
Does function3 have side effects?
The answer to all of these is a resoundingly obvious YES, because they take no inputs, and presumably there are circumstances which cause you to go around the while loop more than once (rather than def function3(): return false). Now let's remodel these functions with explicit state.
s = initialState
sentinel = true
while (sentinel):
    a, b, s, sentinel = function1(a, b, s, sentinel)
    a, b, s, sentinel = function2(a, b, s, sentinel)
    a, b, s, sentinel = function3(a, b, s, sentinel)
return a, b, s
Well that's rather ugly. We know absolutely nothing about what inputs each function draws from, nor do we know anything about how these functions might affect the variables a, b, and sentinel, nor "any other state" which I have simply modeled as s.
So let's make a few assumptions. Firstly, I am going to assume that these functions do not directly depend on nor affect in any way the values of a, b, and sentinel. They might, however, change the "other state". So here's what we get:
s = initState
sentinel = true
while (sentinel):
    a, s2 = function1(s)
    b, s3 = function2(s2)
    sentinel, s4 = function3(s3)
    s = s4
return a, b, s
Notice I've used temporary variables s2, s3, and s4 to indicate the changes that the "other state" goes through. Haskell time. We need a control function to behave like a while loop.
myWhile :: s                   -- an initial state
        -> (s -> (Bool, a, s)) -- given a state, produces a sentinel, a current result, and the next state
        -> (a, s)              -- the result, plus the resultant state
myWhile s f = case f s of
  (False, a, s') -> (a, s')
  (True, _, s')  -> myWhile s' f
Now how would one use such a function? Well, given we have the functions:
function1 :: MyState -> (AType, MyState)
function2 :: MyState -> (BType, MyState)
function3 :: MyState -> (Bool, MyState)
We would construct the desired code as follows:
thatCodeBlockWeAreTryingToSimulate :: MyState -> ((AType, BType), MyState)
thatCodeBlockWeAreTryingToSimulate initState = myWhile initState f
  where f :: MyState -> (Bool, (AType, BType), MyState)
        f s = let (a, s2)        = function1 s
                  (b, s3)        = function2 s2
                  (sentinel, s4) = function3 s3
              in (sentinel, (a, b), s4)
Notice how similar this is to the non-ugly python-like code given above.
You can verify that the code I have presented is well-typed by adding function1 = undefined etc for the three functions, as well as the following at the top of the file:
{-# LANGUAGE EmptyDataDecls #-}
data MyState
data AType
data BType
So the takeaway message is this: in Haskell, you must explicitly model the changes in state. You can use the "State Monad" to make things a little prettier, but you should first understand the idea of passing state around.
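As a taste of that, here is a sketch of the same loop using mtl's Control.Monad.State, assuming the three functions and the types declared above:

import Control.Monad.State (State, state)

step :: State MyState (Bool, (AType, BType))
step = do
  a <- state function1        -- threads MyState through function1
  b <- state function2        -- ...then function2
  sentinel <- state function3 -- ...then function3
  pure (sentinel, (a, b))

loopM :: State MyState (AType, BType)
loopM = do
  (sentinel, ab) <- step
  if sentinel then loopM else pure ab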
Let's take a look at your C++ loop:
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
Haskell is a pure functional language, so it won't fight us as much (and the resulting code will be more useful, both in itself and as an exercise to learn Haskell) if we try to do this without side effects, and without using monads to make it look like we're using side effects either.
Let's start with this structure:
while (state == true) {
    <<do stuff that updates state>>
}
In Haskell we're obviously not going to be checking a variable against true as the loop condition, because it can't change its value[1] and we'd either evaluate the loop body forever or never. So instead, we'll want to be evaluating a function that returns a boolean value on some argument:
while (check something == True) {
    <<do stuff that updates state>>
}
Well, now we don't have a state variable, so that "do stuff that updates state" is looking pretty pointless. And we don't have a something to pass to check. Let's think about this a bit more. We want the something to be checked to depend on what the "do stuff" bit is doing. We don't have side effects, so that means something has to be (or be derived from) returned from the "do stuff". "do stuff" also needs to take something that varies as an argument, or it'll just keep returning the same thing forever, which is also pointless. We also need to return a value out of all this, otherwise we're just burning CPU cycles (again, with no side effects there's no point running a function if we don't use its output in some way, and there's even less point running a function repeatedly if we never use its output).
So how about something like this:
while check func state =
  let next_state = func state
  in if check next_state
       then while check func next_state
       else next_state
Let's try it in GHCi:
*Main> while (<20) (+1) 0
20
This is the result of applying (+1) repeatedly while the result is less than 20, starting from 0.
*Main> while ((<20) . length) (++ "zob") ""
"zobzobzobzobzobzobzob"
This is the result of concatenating "zob" repeatedly while the result's length is less than 20, starting from the empty string.
So you can see I've defined a function that is (sort of a bit) analogous to a while loop from imperative languages. We didn't even need dedicated loop syntax for it! (which is the real reason Haskell has no such syntax; if you need this kind of thing you can express it as a function). It's not the only way to do so, and experienced Haskell programmers would probably use other standard library functions to do this kind of job, rather than writing while.
But I think it's useful to see how you can express this kind of thing in Haskell. It does show that you can't translate things like imperative loops directly into Haskell; I didn't end up translating your loop in terms of my while because it ends up pretty pointless; you never use the result of function1 or function2, they're called with no arguments so they'd always return the same thing in every iteration, and function3 likewise always returns the same thing, and can only return true or false to either cause while to keep looping or stop, with no information resulting.
Presumably in the C++ program they're all using side effects to actually get some work done. If they operate on in-memory things then you need to translate a bigger chunk of your program at once to Haskell for the translation of this loop to make any sense. If those functions are doing IO then you'll need to do this in the IO monad in Haskell, for which my while function doesn't work, but you can do something similar.
[1] As an aside, it's worth trying to understand that "you can't change variables" in Haskell isn't just an arbitrary restriction, nor is it just an acceptable trade-off for the benefits of purity; it is a concept that doesn't make sense given the way Haskell wants you to think about Haskell code. You're writing down expressions that result from evaluating functions on certain arguments: in f x = x + 1 you're saying that f x is x + 1. If you really think of it that way rather than thinking "f takes x, then adds one to it, then returns the result", then the concept of "having side effects" doesn't even apply; how could something existing and being equal to something else somehow change a variable, or have some other side effect?
You should write a solution to your problem in a more functional style.
That said, some code in Haskell works a lot like imperative looping; take for example state monads, tail recursion, until, foldr, etc.
A simple example is the factorial. In C you would write a loop, whereas in Haskell you can write fact n = foldr (*) 1 [2..n].
If you have two functions f :: a -> b and g :: b -> c, where a, b, and c are types like String or [Int], then you can compose them simply by writing g . f.
If you wish to loop over a list or vector you could write map (g . f) or V.map (g . f), assuming you've done import qualified Data.Vector as V.
Example: I wish to print a list of markdown headings like ## <number>. <heading> ##, but I need roman numerals numbered from 1, and my list of headings has type [(String,Double)] where the Double is irrelevant.
import Data.List
import Text.Numeral.Roman

let fun = zipWith (\a b -> "## " ++ a ++ ". " ++ b ++ " ##\n") (map toRoman [1..]) . map fst
fun [("Foo",3.5),("Bar",7.1)]
What the hell does this do?
toRoman turns a number into a string containing the roman numeral. map toRoman does this to every element of a list. map toRoman [1..] does it to every element of the lazy infinite list [1,2,3,4,..], yielding a lazy infinite list of roman numeral strings.
fst :: (a,b) -> a simply extracts the first element of a tuple. map fst throws away our irrelevant Double information along the entire list.
\a b -> "## " ++ a ++ ". " ++ b ++ " ##\n" is a lambda expression that takes two strings and concatenates them together inside the desired formatting strings.
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] takes a two-argument function like our lambda expression and feeds it pairs of elements from its own second and third arguments.
You'll observe that zip, zipWith, etc. only read as much of the lazy infinite list of Roman numerals as needed for the list of headings, meaning I've numbered my headings without maintaining any counter variable.
Finally, I have declared fun without naming its argument because the compiler can figure it out from the fact that map fst requires one argument. You'll notice that I put a . before my second map too. I could've written (map fst h) or $ map fst h instead if I'd written fun h = ..., but leaving the argument off fun meant I needed to compose it with zipWith after applying zipWith to two of the three arguments it wants.
I'd hope the compiler combines the zipWith and maps into one single loop via inlining.
