In working through a solution to the 8 Queens problem, a person used the following line of code:
sameDiag try qs = any (\(colDist,q) -> abs (try - q) == colDist) $ zip [1..] qs
try is an item; qs is a list of items of the same type.
Can someone explain how colDist and q in the lambda function get bound to anything?
How did try and q, used in the body of the lambda, find their way into the same scope?
To the degree this is a Haskell idiom, what problem does this design approach help solve?
The function any is a higher-order function that takes 2 arguments:
the 1st argument is of type a -> Bool, i.e. a function from a to Bool
the 2nd argument is of type [a], i.e. a list of items of type a;
i.e. the 1st argument is a function that takes an element of the list passed as the 2nd argument and returns a Bool based on that element. (Strictly speaking it can take any value of type a, not just one from that list, but any will obviously only ever invoke it on the list's own elements.)
You can then simplify thinking about the original snippet by doing a slight refactoring:
sameDiag :: Int -> [Int] -> Bool
sameDiag try qs = any f xs
  where
    xs = zip [1..] qs
    f = \(colDist, q) -> abs (try - q) == colDist
which can be transformed into
sameDiag :: Int -> [Int] -> Bool
sameDiag try qs = any f xs
  where
    xs = zip [1..] qs
    f (colDist, q) = abs (try - q) == colDist
which in turn can be transformed into
sameDiag :: Int -> [Int] -> Bool
sameDiag try qs = any f xs
  where
    xs = zip [1..] qs
    f pair = abs (try - q) == colDist
      where (colDist, q) = pair
(Note that sameDiag could also have a more general type Integral a => a -> [a] -> Bool rather than the current monomorphic one)
So how does the pair in f pair = ... get bound to a value? Simple: f is just a function, and whoever calls it must pass along a value for the pair argument. When any is called with f as its first argument, it is the invocation of any that does the calling of f, passing individual elements of the list xs as values for the pair argument.
And since xs is a list of pairs, it's fine to pass an individual pair from that list to f, which is exactly what f expects.
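To make the binding concrete, here is a small GHCi session tracing one call (the values are made up purely for illustration):

Prelude> let sameDiag try qs = any (\(colDist, q) -> abs (try - q) == colDist) $ zip [1..] qs
Prelude> zip [1..] [6, 2]   -- the pairs that any will feed, one by one, to the lambda
[(1,6),(2,2)]
Prelude> sameDiag 4 [6, 2]  -- the pair (2,2) matches: abs (4 - 2) == 2
True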
EDIT: a further explanation of any to address the asker's comment:
Is this a fair synthesis? This approach to designing a higher-order function allows the invoking code to change how f behaves AND invoke the higher-order function with a list that requires additional processing prior to being used to invoke f for every element in the list. Encapsulating the list processing (in this case with zip) seems the right thing to do, but is the intent of this additional processing really clear in the original one-liner above?
There's really no additional processing done by any prior to invoking f; there is just some minimal bookkeeping on top of simply iterating through the passed-in list xs: invoking f on the elements during the iteration, and immediately breaking off the iteration and returning True the first time f returns True for any list element.
Most of the behavior of any is "implicit", though, in that it's taken care of by Haskell's lazy evaluation and basic language semantics, as well as by the existing functions any is composed of (at least in my version of it below, any'; I haven't looked at the built-in Prelude version of any, but I'm sure it's not much different, just more heavily optimised).
In fact, any is so simple it's almost trivial to re-implement as a one-liner at a GHCi prompt:
Prelude> let any' f xs = or (map f xs)
let's see now what GHC computes as its type:
Prelude> :t any'
any' :: (a -> Bool) -> [a] -> Bool
— same as the built-in any. So let's give it some trial runs:
Prelude> any' odd [1, 2, 3] -- any odd values in the list?
True
Prelude> any' even [1, 3] -- any even ones?
False
Prelude> let adult = (>=18)
Prelude> any' adult [17, 17, 16, 15, 17, 18]
True
See how you can sometimes write code that almost reads like English with higher-order functions?
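The short-circuiting mentioned earlier is directly observable, too: thanks to laziness, any' can answer True even for an infinite list, because it stops at the first element satisfying the predicate:

Prelude> any' (> 10) [1..]
True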
zip :: [a] -> [b] -> [(a,b)] takes two lists and joins them into pairs, dropping whatever is left over from the longer list.
any :: (a -> Bool) -> [a] -> Bool takes a function and a list of as, and returns True if the function returns True for any value in the list.
So colDist and q are the first and second elements of the pairs in the list made by zip [1..] qs, and they get bound when any applies the lambda to each pair.
q is only bound within the body of the lambda function - this is the same as in lambda calculus. Since try was bound earlier, in the function definition, it is still available in this inner scope. If you think of lambda calculus, the term \x.\y.x+y makes sense despite x and y being bound at different times.
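Here is a minimal GHCi sketch of the same scoping (addN is a made-up name): n is bound when addN is applied, x only when the inner lambda is, and both are visible in the lambda's body, just like try and q above.

Prelude> let addN n = \x -> n + x
Prelude> (addN 3) 4
7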
As for the design approach, this approach is much cleaner than trying to iterate or recurse through the list manually. It seems quite clear in its intentions to me (with respect to the larger codebase it comes from).
I'm trying to better understand Haskell's laziness, such as when it evaluates an argument to a function.
From this source:
But when a call to const is evaluated (that’s the situation we are interested in, here, after all), its return value is evaluated too ... This is a good general principle: a function obviously is strict in its return value, because when a function application needs to be evaluated, it needs to evaluate, in the body of the function, what gets returned. Starting from there, you can know what must be evaluated by looking at what the return value depends on invariably. Your function will be strict in these arguments, and lazy in the others.
So a function in Haskell always evaluates its own return value? If I have:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
head (foo [1..]) -- = 4
According to the above paragraph, map (* 2) xs must be evaluated. Intuitively, I would think that means applying the map to the entire list, resulting in an infinite loop.
But, I can successfully take the head of the result. I know that : is lazy in Haskell, so does this mean that evaluating map (* 2) xs just means constructing something else that isn't fully evaluated yet?
What does it mean to evaluate a function applied to an infinite list? If the return value of a function is always evaluated when the function is evaluated, can a function ever actually return a thunk?
Edit:
bar x y = x
var = bar (product [1..]) 1
This code doesn't hang. When I create var, does it not evaluate its body? Or does it set var to product [1..] without evaluating that? If the latter, bar is not returning its body in WHNF, right? So did it really 'evaluate' x? How can bar be strict in x if it doesn't hang computing product [1..]?
First of all, Haskell does not specify when evaluation happens so the question can only be given a definite answer for specific implementations.
The following is true for all non-parallel implementations that I know of, like ghc, hbc, nhc, hugs, etc (all G-machine based, btw).
BTW, something to remember is that when you hear "evaluate" for Haskell it normally means "evaluate to WHNF".
Unlike strict languages you have to distinguish between two "callers" of a function, the first is where the call occurs lexically, and the second is where the value is demanded. For a strict language these two always coincide, but not for a lazy language.
Let's take your example and complicate it a little:
foo [] = []
foo (_:xs) = map (* 2) xs
bar x = (foo [1..], x)
main = print (head (fst (bar 42)))
The foo function occurs in bar. Evaluating bar will return a pair, and the first component of the pair is a thunk corresponding to foo [1..]. So bar is what would be the caller in a strict language, but in the case of a lazy language it doesn't call foo at all; instead, it just builds the closure.
Now, in the main function we actually need the value of head (fst (bar 42)), since we have to print it. So the head function will actually be called. head is defined by pattern matching, so it needs the value of its argument; so fst is called. It, too, is defined by pattern matching and needs its argument, so bar is called, and bar returns a pair, whose first component fst evaluates and returns. And now, finally, foo is "called"; by "called" I mean that the thunk is evaluated ("entered", as it's sometimes called in TIM terminology), because its value is needed.
The only reason the actual code for foo runs is that we want a value, so foo had better return a value (i.e., a WHNF). foo will evaluate its argument and end up in the second branch, where it tail-calls into the code for map. map is defined by pattern matching, and it evaluates its argument, which is a cons. So map returns {(*2) y} : {map (*2) ys}, where I have used {} to indicate a closure being built. As you can see, map just returns a cons cell whose head is a closure and whose tail is a closure.
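You can actually watch this closure-building happen in GHCi with the :sprint command, which prints a value without forcing it (_ marks an unevaluated thunk). A sample session:

Prelude> let xs = map (*2) [1..5] :: [Int]
Prelude> :sprint xs
xs = _
Prelude> head xs
2
Prelude> :sprint xs
xs = 2 : _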
To understand the operational semantics of Haskell better I suggest you look at some paper describing how to translate Haskell to some abstract machine, like the G-machine.
I always found that the term "evaluate," which I had learned in other contexts (e.g., Scheme programming), got me all confused when I tried to apply it to Haskell, and that I made a breakthrough when I started to think of Haskell in terms of forcing expressions instead of "evaluating" them. Some key differences:
"Evaluation," as I learned the term before, strongly connotes mapping expressions to values that are themselves not expressions. (One common technical term here is "denotations.")
In Haskell, the process of forcing is IMHO most easily understood as expression rewriting. You start with an expression, and you repeatedly rewrite it according to certain rules until you get an equivalent expression that satisfies a certain property.
In Haskell the "certain property" has the unfriendly name weak head normal form ("WHNF"), which for our purposes just means that the expression is either a nullary data constructor or an application of a data constructor. (The full definition also counts lambdas as WHNF, but the rules below ignore that case.)
Let's translate that to a very rough set of informal rules. To force an expression expr:
If expr is a nullary constructor or a constructor application, the result of forcing it is expr itself. (It's already in WHNF.)
If expr is a function application f arg, then the result of forcing it is obtained this way:
Find the definition of f.
Can you pattern match this definition against the expression arg? If not, then force arg and try again with the result of that.
Substitute the pattern match variables in the body of f with the parts of (the possibly rewritten) arg that correspond to them, and force the resulting expression.
One way of thinking of this is that when you force an expression, you're trying to rewrite it minimally to reduce it to an equivalent expression in WHNF.
Let's apply this to your example:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
-- We want to force this expression:
head (foo [1..])
We will need definitions for head and map:
head [] = undefined
head (x:_) = x
map _ [] = []
map f (x:xs) = f x : map f xs
-- Not real code, but a rule we'll be using for forcing infinite ranges.
[n..] ==> n : [(n+1)..]
So now:
head (foo [1..]) ==> head (foo (1 : [2..]))      -- using the forcing rule for [n..], so foo can pattern match
                 ==> head (map (*2) [2..])       -- using the definition of foo (the _:xs branch, with xs = [2..])
                 ==> head (map (*2) (2 : [3..])) -- using the forcing rule for [n..] again
                 ==> head (2*2 : map (*2) [3..]) -- using the definition of map
                 ==> 2*2                         -- using the definition of head
                 ==> 4                           -- using the definition of *
I believe the idea must be that in a lazy language if you're evaluating a function application, it must be because you need the result of the application for something. So whatever reason caused the function application to be reduced in the first place is going to continue to need to reduce the returned result. If we didn't need the function's result we wouldn't be evaluating the call in the first place, the whole application would be left as a thunk.
A key point is that the standard "lazy evaluation" order is demand-driven. You only evaluate what you need. Evaluating more risks violating the language spec's definition of "non-strict semantics" and looping or failing for some programs that should be able to terminate; lazy evaluation has the interesting property that if any evaluation order can cause a particular program to terminate, so can lazy evaluation.[1]
But if we only evaluate what we need, what does "need" mean? Generally it means either
a pattern match needs to know what constructor a particular value is (e.g. I can't know what branch to take in your definition of foo without knowing whether the argument is [] or _:xs)
a primitive operation needs to know the entire value (e.g. the arithmetic circuits in the CPU can't add or compare thunks; I need to fully evaluate two Int values to call such operations)
the outer driver that executes the main IO action needs to know what the next thing to execute is
So say we've got this program:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
main :: IO ()
main = print (head (foo [1..]))
To execute main, the IO driver has to evaluate the thunk print (head (foo [1..])) to work out that it's print applied to the thunk head (foo [1..]). print needs to evaluate its argument in order to print it, so now we need to evaluate that thunk.
head starts by pattern matching its argument, so now we need to evaluate foo [1..], but only to WHNF - just enough to tell whether the outermost list constructor is [] or :.
foo starts by pattern matching on its argument. So we need to evaluate [1..], also only to WHNF. That's basically 1 : [2..], which is enough to see which branch to take in foo.[2]
The : case of foo (with xs bound to the thunk [2..]) evaluates to the thunk map (*2) [2..].
So foo is evaluated, and didn't evaluate its body. However, we only did that because head was pattern matching to see if we had [] or x : _. We still don't know that, so we must immediately continue to evaluate the result of foo.
This is what the article means when it says functions are strict in their result. Given that a call to foo is evaluated at all, its result will also be evaluated (and so, anything needed to evaluate the result will also be evaluated).
But how far it needs to be evaluated depends on the calling context. head is only pattern matching on the result of foo, so it only needs a result to WHNF. We can get an infinite list to WHNF (we already did so, with 1 : [2..]), so we don't necessarily get in an infinite loop when evaluating a call to foo. But if head were some sort of primitive operation implemented outside of Haskell that needed to be passed a completely evaluated list, then we'd be evaluating foo [1..] completely, and thus would never finish in order to come back to head.
So, just to complete my example, we're evaluating map (*2) [2..].
map pattern matches its second argument, so we need to evaluate [2..] as far as 2 : [3..]. That's enough for map to return the thunk (*2) 2 : map (*2) [3..], which is in WHNF. And so it's done; we can finally return to head.
head ((*2) 2 : map (*2) [3..]) doesn't need to inspect either side of the :; it just needs to know that there is one, so it can return the left side. So it just returns the unevaluated thunk (*2) 2.
Again though, we only evaluated the call to head this far because print needed to know what its result is, so although head doesn't evaluate its result, its result is always evaluated whenever the call to head is.
(*2) 2 evaluates to 4, print converts that into the string "4" (via show), and the line gets printed to the output. That was the entire main IO action, so the program is done.
[1] Implementations of Haskell, such as GHC, do not always use "standard lazy evaluation", and the language spec does not require it. If the compiler can prove that something will always be needed, or that it cannot loop or error, then it's safe to evaluate it even when lazy evaluation wouldn't (yet) do so. This can often be faster, so GHC's optimisations do actually do this.
[2] I'm skipping over a few details here, like that print has some non-primitive implementation we could step inside and lazily evaluate, and that [1..] could be further expanded to the functions that actually implement that syntax.
Not necessarily. Haskell is lazy, meaning that it only evaluates when it needs to. This has some interesting effects. If we take the below code, for example:
-- File: lazinessTest.hs
(>?) :: a -> b -> b
a >? b = b
main = (putStrLn "Something") >? (putStrLn "Something else")
This is the output of the program:
$ ./lazinessTest
Something else
This indicates that putStrLn "Something" is never evaluated. But it's still being passed to the function, in the form of a 'thunk'. These 'thunks' are unevaluated values that, rather than being concrete values, are like a breadcrumb-trail of how to compute the value. This is how Haskell laziness works.
In our case, two 'thunks' are passed to >?, but only one is passed out, meaning that only one is evaluated in the end. The same applies to const, where the second argument can be safely ignored and therefore is never computed. As for map, GHC is smart enough to realise that we don't care about the end of the list, and only bothers to compute what it needs to, in your case the second element of the original list.
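You can check this directly in GHCi: undefined explodes when forced, yet passing it as an argument that is never demanded is perfectly harmless, because the thunk is never entered:

Prelude> const 1 undefined
1
Prelude> fst (1, undefined)
1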
However, it's best to leave the thinking about laziness to the compiler and keep coding, unless you're dealing with IO, in which case you really, really should think about laziness, because you can easily go wrong, as I've just demonstrated.
There are lots and lots of online articles on the Haskell wiki to look at, if you want more detail.
A function can evaluate to an ordinary return value:
head (x:_) = x
or to an exception/error:
head _ = error "Head: List is empty!"
or to bottom (⊥), a computation that never produces a value:
a = a
b = last [1 ..]
The first problem in 99 Haskell Problems is to "find the last element of a list". I came up with two solutions:
Solution 1 (This works)
myLast :: [a] -> a
myLast = head . reverse
Solution 2 (This doesn't work)
myLast :: [a] -> a
myLast = head $ reverse
Question
Why does solution 1 work but not solution 2? What I'm most confused by is how neither implementation supplies pattern matching.
head is a function of one argument.[1] If you use f $ x to apply a function to something, it's the same as simply writing f x (or, if you prefer, f(x), or (f)x; all the same thing, just uglier): the argument is "filled in" by the specified value. So the result of head $ reverse would simply be whatever head gives you when fed an argument of reverse's type. That, however, doesn't work, because head needs a list but reverse is a function. $ itself doesn't care about this; it just hands the parameter on. For instance, you could write
Prelude> :t map $ reverse
map $ reverse :: [[a]] -> [[a]]
because the first argument of map is in fact a function.
With (.) it's different. That operator cares about the type of the argument to its right (it must also be a function), and it doesn't simply feed it to the left function right away. Rather, f . g yields another function, which does the following: it waits for some argument x, feeds it to g, and then feeds the result of that to f.
Now, if you write
myLast' = head . reverse
it just means you define myLast' as the function that (.) gives you as the composition of head and reverse. That no arguments are mentioned for myLast' here doesn't matter: [a] -> a is just some type, so you can define variables of this type (like myLast'), by binding them to values that happen to have such a function type (like head . reverse). You could, if you wanted, make the parameter explicit:
myLast'' x = (head . reverse) x
Note that the parens are needed because otherwise it's parsed as head . (reverse x), which wouldn't work: reverse x is not a function anymore, just a result list, and therefore you couldn't compose it with head. What you could do, however, is apply head to it:
myLast''' x = head $ reverse x
[1] In fact, every function in Haskell has just one argument... but we say "two-argument function" for things like (+) :: Int -> Int -> Int, though really this is a one-argument function returning a one-argument function returning an Int: (+) :: Int -> (Int -> Int).
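A quick GHCi session confirms that currying claim:

Prelude> :t (+)
(+) :: Num a => a -> a -> a
Prelude> :t (+) 1
(+) 1 :: Num a => a -> a
Prelude> ((+) 1) 2
3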
Roughly, application ($) is used between a function and its argument (a list in this case), while composition (.) is used between two functions.
About pattern matching: in the first solution, the function reverse will perform the needed pattern match for you, so myLast does not have to.
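And a quick GHCi check of the working solution:

Prelude> let myLast = head . reverse
Prelude> myLast [1, 2, 3]
3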
I'm new to Haskell. I know I can create a reverse function by doing this:
reverse :: [a] -> [a]
reverse [] = []
reverse (x:xs) = (Main.reverse xs) ++ [x]
Is there such a thing as (xs:x) (a list concatenated with an element, i.e. x is the last element in the list) so that I put the last list element at the front of the list?
rotate :: [a] -> [a]
rotate [] = []
rotate (xs:x) = [x] ++ xs
I get these errors when I try to compile a program containing this function:
Occurs check: cannot construct the infinite type: a = [a]
When generalising the type(s) for `rotate'
I'm also new to Haskell, so my answer is not authoritative. Anyway, I would do it using last and init:
Prelude> last [1..10] : init [1..10]
[10,1,2,3,4,5,6,7,8,9]
or
Prelude> [ last [1..10] ] ++ init [1..10]
[10,1,2,3,4,5,6,7,8,9]
The short answer is: this is not possible with pattern matching, you have to use a function.
The long answer is: it's not in standard Haskell, but it is if you are willing to use an extension called View Patterns, and also if you have no problem with your pattern matching eventually taking longer than constant time.
The reason is that pattern matching is based on how the structure was constructed in the first place. A list is an algebraic data type, which has the following structure:
data List a = Empty | Cons a (List a)
  deriving (Show) -- this is just so you can print the List
When you declare a type like that you generate three objects: a type constructor List and two data constructors, Empty and Cons. The type constructor takes types and turns them into other types; i.e., List takes a type a and creates another type, List a. Each data constructor works like a function that returns something of type List a. In this case you have:
Empty :: List a
representing an empty list and
Cons :: a -> List a -> List a
which takes a value of type a and a list, and prepends the value to the front of the list, returning another list. So you can build your lists like this:
empty = Empty -- similar to []
list1 = Cons 1 Empty -- similar to 1:[] = [1]
list2 = Cons 2 list1 -- similar to 2:(1:[]) = 2:[1] = [2,1]
This is more or less how lists work, but in the place of Empty you have [] and in the place of Cons you have (:). When you type something like [1,2,3] this is just syntactic sugar for 1:2:3:[] or Cons 1 (Cons 2 (Cons 3 Empty)).
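A quick GHCi check (assuming the List type above, with its deriving (Show), has been loaded) makes the correspondence visible:

> Cons 2 (Cons 1 Empty)
Cons 2 (Cons 1 Empty)
> 2 : 1 : []
[2,1]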
When you do pattern matching, you are "de-constructing" the type. Having knowledge of how the type is structured allows you to uniquely disassemble it. Consider the function:
head :: List a -> a
head (Empty) = error "the empty list has no head"
head (Cons x xs) = x
What happens in the pattern matching is that the data constructor is matched against the structure you give. If it matches Empty, then you have an empty list. If it matches Cons x xs, then x must have type a and be the head of the list, and xs must have type List a and be the tail of the list, because that's the type of the data constructor:
Cons :: a -> List a -> List a
If Cons x xs is of type List a then x must be a and xs must be List a. The same is true for (x:xs). If you look at the type of (:) in GHCi:
> :t (:)
(:) :: a -> [a] -> [a]
So, if (x:xs) is of type [a], x must be a and xs must be [a]. The error message you get when you write (xs:x) and then treat xs like a list is exactly because of this: by your use of (:) the compiler infers that xs has type a, and by your use of ++ it infers that xs must be [a]. Then it freaks out because there is no finite type a for which a = [a], and that is what it's trying to tell you with that error message.
If you need to disassemble the structure in ways that don't match how the data constructor built it, then you have to write your own function. There are two functions in the standard library that do what you want: last returns the last element of a list, and init returns all but the last element of the list.
But note that pattern matching happens in constant time. To find out the head and the tail of a list, it doesn't matter how long the list is, you just have to look to the outermost data constructor. Finding the last element is O(N): you have to dig until you find the innermost Cons or the innermost (:), and this requires you to "peel" the structure N times, where N is the size of the list.
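To see those N "peels" spelled out, here is a hand-rolled version of last (named lastElem to avoid clashing with the Prelude; just a sketch):

-- O(N): we must walk past every (:) before we can answer.
lastElem :: [a] -> a
lastElem []     = error "lastElem: empty list"
lastElem [x]    = x
lastElem (_:xs) = lastElem xs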
If you frequently have to look for the last element in long lists, you might consider if using a list is a good idea after all. You can go after Data.Sequence (constant time access to first and last elements), Data.Map (log(N) time access to any element if you know its key), Data.Array (constant time access to an element if you know its index), Data.Vector or other data structures that match your needs better than lists.
Ok, that was the short answer (:P). The long one you'll have to look up a bit by yourself, but here's an intro.
You can have this working with a syntax very close to pattern matching by using view patterns. View Patterns are an extension that you can use by having this as the first line of your code:
{-# Language ViewPatterns #-}
The instructions of how to use it are here: http://hackage.haskell.org/trac/ghc/wiki/ViewPatterns
With view patterns you could do something like:
view :: [a] -> (a, [a])
view xs = (last xs, init xs)
someFunction :: [a] -> ...
someFunction (view -> (x,xs)) = ...
then x and xs will be bound to the last element and the init of the list you pass to someFunction. Syntactically it feels like pattern matching, but it is really just applying last and init to the given list.
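As a sketch of how this could solve the original rotate problem (same view as above; the empty list gets its own clause, since last and init fail on []):

{-# Language ViewPatterns #-}

view :: [a] -> (a, [a])
view xs = (last xs, init xs)

rotate :: [a] -> [a]
rotate [] = []
rotate (view -> (x, xs)) = x : xs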
If you're willing to use something different from plain lists, you could have a look at the Seq type in the containers package, as documented here. This has O(1) cons (element at the front) and snoc (element at the back), and allows pattern matching the element from the front and the back, through use of Views.
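For instance, here is a sketch of the same rotation on Seq (rotateSeq is a made-up name; viewr, ViewR and (<|) come from Data.Sequence):

import Data.Sequence (Seq, ViewR(..), viewr, (<|))

-- Move the last element to the front; both ends are cheap on a Seq.
rotateSeq :: Seq a -> Seq a
rotateSeq s = case viewr s of
  EmptyR  -> s         -- nothing to rotate
  xs :> x -> x <| xs   -- x was the last element; cons it onto the rest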
"Is there such a thing as (xs:x) (a list concatenated with an element, i.e. x is the last element in the list) so that I put the last list element at the front of the list?"
No, not in the sense that you mean. These "patterns" on the left-hand side of a function definition are a reflection of how a data structure is defined by the programmer and stored in memory. Haskell's built-in list implementation is a singly-linked list, ordered from the beginning - so the pattern available for function definitions reflects exactly that, exposing the very first element plus the rest of the list (or alternatively, the empty list).
For a list constructed in this way, the last element is not immediately available as one of the stored components of the list's top-most node. So instead of that value being present in the pattern on the left-hand side of the function definition, it's calculated by the function body on the right-hand side.
Of course, you can define new data structures, so if you want a new kind of list that makes the last element available through pattern matching, you could build that. But there'd be some cost: maybe you'd just be storing the list backwards, so that it's now the first element that is not available by pattern matching and requires computation; maybe you'd store both the first and last values in the structure, which would require additional storage space and bookkeeping.
It's perfectly reasonable to think about multiple implementations of a single data structure concept - to look forward a little bit, this is one use of Haskell's class/instance definitions.
Rotating as you suggested might be much less efficient than it looks: last is not an O(1) operation but O(N), and that can turn the rotation into an O(N²) algorithm.
Source:
http://www.haskell.org/ghc/docs/6.12.2/html/libraries/base-4.2.0.1/src/GHC-List.html#last
Your first version looks like it has O(N) complexity, but it does not, because ++ is also an O(N) operation.
You should do it like this instead:
-- accumulator-based reversal, as in GHC.List's reverse
rotate l = rev l []
  where
    rev [] a = a
    rev (x:xs) a = rev xs (x:a)
source : http://www.haskell.org/ghc/docs/6.12.2/html/libraries/base-4.2.0.1/src/GHC-List.html#reverse
In your latter example, x is in fact a list. [x] becomes a list of lists, e.g. [[1,2], [3,4]].
(++) wants a list of the same type on both sides. When you use it here, you're doing [[a]] ++ [a], which is why the compiler complains. According to your code, a would have to be the same type as [a], which is impossible.
In (x:xs), x is the first item of the list (the head) and xs is everything but the head, i.e., the tail. The names are irrelevant here, you might as well call them (head:tail).
If you really want to take the last item of the input list and put that in the front of the result list, you could do something like:
rotate :: [a] -> [a]
rotate [] = []
rotate lst = last lst : init lst
N.B. I haven't tested this code at all as I don't have a Haskell environment available at the moment.
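For what it's worth, the intended behaviour would look like this in GHCi (a hypothetical session, given the N.B. above):

Prelude> rotate [1, 2, 3, 4]
[4,1,2,3]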
iterate :: (a -> a) -> a -> [a]
(As you probably know) iterate is a function that takes a function and a starting value. It applies the function to the starting value, then applies the same function to the result, and so on.
Prelude> take 5 $ iterate (^2) 2
[2,4,16,256,65536]
Prelude>
The result is an infinite list. (that's why I use take).
My question is: how would you implement your own iterate' function in Haskell, using only the basics ((:), (++), lambdas, pattern matching, guards, etc.)?
(Haskell beginner here)
Well, iterate constructs an infinite list of values obtained by repeatedly applying f, starting from a. So I would start by writing a function that prepends the value a to the list built by recursively calling iterate with f a:
iterate :: (a -> a) -> a -> [a]
iterate f a = a : iterate f (f a)
Thanks to lazy evaluation, only that portion of the constructed list necessary to compute the value of my function will be evaluated.
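For example, entering it as iterate' in GHCi (to avoid shadowing the Prelude's iterate):

Prelude> let iterate' f a = a : iterate' f (f a)
Prelude> take 5 (iterate' (+3) 0)
[0,3,6,9,12]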
Also note that you can find concise definitions for the range of basic Haskell functions in the report's Standard Prelude.
Reading through this list of straightforward definitions, which essentially bootstrap a rich library out of raw primitives, can be very educational and eye-opening in terms of providing a window onto the "Haskell way".
I remember a very early aha moment on reading: data Bool = False | True.