Closures and list comprehensions in Haskell - haskell

I'm playing around with Haskell at the moment and thus stumbled upon the list comprehension feature.
Naturally, I would have used a closure to do this kind of thing:
Prelude> [x|x<-[1..7],x>4] -- list comprehension
[5,6,7]
Prelude> filter (\x->x>4) [1..7] -- closure
[5,6,7]
I still don't have a feel for this language yet, so which way would a Haskell programmer go?
What are the differences between these two solutions?

Idiomatic Haskell would be filter (> 4) [1..7]
Note that you are not capturing any of the lexical scope in your closure, and are instead making use of a sectioned operator. That is to say, you want a partial application of >, which operator sections give you immediately. List comprehensions are sometimes attractive, but the usual perception is that they do not scale as nicely as the usual suite of higher order functions ("scale" with respect to more complex compositions). That kind of stylistic decision is, of course, largely subjective, so YMMV.
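For instance, a quick sketch (the numbers here are made up) of how the higher-order-function style composes, next to the equivalent comprehension:
viaHofs, viaComprehension :: [Int]
viaHofs          = (map (* 2) . filter (> 4)) [1..7]   -- filter, then map, as composed functions
viaComprehension = [ x * 2 | x <- [1..7], x > 4 ]      -- the same thing as a comprehension
-- both evaluate to [10,12,14]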

List comprehensions come in handy if the elements are somewhat complex and one needs to filter them by pattern matching, or the mapping part feels too complex for a lambda abstraction, which should be short (or so I feel), or if one has to deal with nested lists. In the latter case, a list comprehension is often more readable than the alternatives (to me, anyway).
For example something like:
[ (f b, (g . fst) a) | (Just a, Right bs) <- somelist, a `notElem` bs, (_, b) <- bs ]
But for your example, the section (>4) is a really nice way to write (\a -> a > 4), and because you use it only for filtering, most people would prefer Anthony's solution.

Related

Evaluation of list-comprehensions in Haskell

I am wondering exactly how list comprehensions are evaluated in Haskell. After reading Removing syntactic sugar: List comprehension in Haskell and Haskell Lazy Evaluation and Reuse, I still don't really understand whether
[<function> x|x <- <someList>, <somePredicate>]
is actually exactly equivalent (not just in outcome but in evaluation) to
filter (<somePredicate>) . map (<function>) $ <someList>
and if so, does this mean it can potentially reduce time complexity drastically to build up the desired list with only desired elements?
Also, how does this work in terms of infinite lists? To be specific: I assume something like:
[x|x <- [1..], x < 20]
will be evaluated in finite time, but how "obvious" does the fact that there are no more elements above some value which satisfy the predicate need to be, for the compiler to consider it? Would
[x|x <- [1..], (sum.map factorial $ digits x) == x]
work (see Project Euler problem 34, https://projecteuler.net/problem=34)? There is obviously an upper bound, because for an n-digit number the digit-factorial sum is at most n*9!, which from some n on is smaller than 10^(n-1), the smallest n-digit number. But do I need to supply that bound, or will the compiler find it?
There's nothing obvious about the fact that a particular infinite sequence has no more elements matching a predicate. When you pass a list to filter, it has no way of knowing any other properties of the elements than that an element can be passed to the predicate.
You can write your own version of Ord a => List a which can describe a sequence as ascending or not, and a version of filter that can use this information to stop looking at elements past a particular threshold. Unfortunately, list comprehensions won't use either of them.
Instead, I'd use a combination of takeWhile and a comprehension without a predicate / a plain map. Somewhere in the takeWhile argument you supply the information about the expected upper bound yourself; for numbers of at most n decimal digits, it would be 10^n.
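A minimal sketch of that approach for the Project Euler example; digits and factorial below are assumed helper functions (not standard library ones), and 7 * 9! serves as the bound because any number with eight or more digits exceeds the largest possible digit-factorial sum:
import Data.Char (digitToInt)

factorial :: Int -> Int
factorial n = product [1..n]

digits :: Int -> [Int]
digits = map digitToInt . show

-- takeWhile supplies the bound explicitly; the comprehension alone never could.
curious :: [Int]
curious = [ x | x <- takeWhile (<= 7 * factorial 9) [10..]
              , sum (map factorial (digits x)) == x ]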
[<function> x|x <- <someList>, <somePredicate>]
should always evaluate to the same result as
filter (<somePredicate>) . map (<function>) $ <someList>
However, there is no guarantee that this is how the compiler will actually do it. The section on list comprehensions in the Haskell Report only mentions what list comprehensions should do, not how they should work. So each compiler is free to do as its developers find best. Therefore, you should not assume anything about the performance of list comprehensions or that the compiler will do any optimizations.
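As a rough sketch of the translation the Report gives (the example values are my own), a comprehension with one generator and one guard can always be rewritten with concatMap; a compiler is free to produce something else as long as the result agrees:
comprehension, desugared :: [Int]
comprehension = [ x * 2 | x <- [1..10], even x ]
desugared     = concatMap (\x -> if even x then [x * 2] else []) [1..10]
-- both evaluate to [4,8,12,16,20]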

What is the practical use of monoids?

I'm reading Learn You a Haskell; I've already covered Applicative and now I'm on monoids. I have no problem understanding either of them, but while I find Applicative useful in practice, Monoid doesn't seem to be. So I think I'm missing something about Haskell.
First, speaking of Applicative: it gives us something like a uniform syntax for performing operations on 'containers'. So we can use normal functions on Maybe, lists, IO (should I have said monads? I don't know monads yet), and functions:
λ> :m + Control.Applicative
λ> (+) <$> (Just 10) <*> (Just 13)
Just 23
λ> (+) <$> [1..5] <*> [1..5]
[2,3,4,5,6,3,4,5,6,7,4,5,6,7,8,5,6,7,8,9,6,7,8,9,10]
λ> (++) <$> getLine <*> getLine
one line
and another one
"one line and another one"
λ> (+) <$> (* 7) <*> (+ 7) $ 10
87
So Applicative is an abstraction. I think we could live without it, but it helps express some ideas more clearly, and that's fine.
Now, let's take a look at Monoid. It is also an abstraction, and a pretty simple one. But does it help us? For every example from the book it seems obvious that there is a clearer way to do things:
λ> :m + Data.Monoid
λ> mempty :: [a]
[]
λ> [1..3] `mappend` [4..6]
[1,2,3,4,5,6]
λ> [1..3] ++ [4..6]
[1,2,3,4,5,6]
λ> mconcat [[1,2],[3,6],[9]]
[1,2,3,6,9]
λ> concat [[1,2],[3,6],[9]]
[1,2,3,6,9]
λ> getProduct $ Product 3 `mappend` Product 9
27
λ> 3 * 9
27
λ> getProduct $ Product 3 `mappend` Product 4 `mappend` Product 2
24
λ> product [3,4,2]
24
λ> getSum . mconcat . map Sum $ [1,2,3]
6
λ> sum [1..3]
6
λ> getAny . mconcat . map Any $ [False, False, False, True]
True
λ> or [False, False, False, True]
True
λ> getAll . mconcat . map All $ [True, True, True]
True
λ> and [True, True, True]
True
So we have noticed some patterns and created a new type class... Fine, I like math. But from a practical point of view, what is the point of Monoid? How does it help us express ideas better?
Gabriel Gonzalez wrote great material on his blog about why you should care, and you truly should care. You can read it here (and also see this).
It's about scalability, architecture & design of API. The idea is that there's the "Conventional architecture" that says:
Combine several components together of type A to generate a
"network" or "topology" of type B
The issue with this kind of design is that as your program scales, so does your hell when you refactor.
So you want to change module A to improve your design or domain, so you do. Oh, but now module B & C that depend on A broke. You fix B, great. Now you fix C. Now B broke again, as B also used some of C's functionality. And I can go on with this forever, and if you ever used OOP - so can you.
Then there's what Gabriel calls the "Haskell architecture":
Combine several components together of type A to generate a new
component of the same type A, indistinguishable in character from its substituent parts
This solves the issue, elegantly too. Basically: do not layer your modules or extend to make specialized ones.
Instead, combine.
So now, what's encouraged is that instead of saying things like "I have multiple X, so let's make a type to represent their union", you say "I have multiple X, so let's combine them into an X". Or in simple English: "Let's make composable types in the first place." (Do you sense the monoids lurking yet?)
Imagine you want to make a form for your webpage or application, and you have the module "Personal Information Form" that you created because you needed personal information. Later you find that you also need a "Change Picture Form", so you quickly write that. And now you say you want to combine them, so let's make a "Personal Information & Picture Form" module. In real-life scalable applications this can and does get out of hand. Probably not with forms, but to demonstrate: you compose and compose until you end up with a "Personal Information & Change Picture & Change Password & Change Status & Manage Friends & Manage Wishlist & Change View Settings & Please don't extend me anymore & please & please stop! & STOP!!!!" module. This is not pretty, and you will have to manage this complexity in the API. Oh, and if you want to change anything - it probably has dependencies. So.. yeah.. Welcome to hell.
Now let's look at the other option, but first let's look at the benefit because it will guide us to it:
These abstractions scale limitlessly because they always preserve
combinability, therefore we never need to layer further abstractions
on top. This is one reason why you should learn Haskell: you learn how
to build flat architectures.
Sounds good. So, instead of making a "Personal Information Form" / "Change Picture Form" module, stop and think whether we can make anything here composable. Well, we can just make a "Form", right? That would be more abstract, too.
Then it can make sense to construct one for everything you want, combine them together and get one form just like any other.
And so, you don't get a messy complex tree anymore, because the key is that you take two forms and get one form: Form -> Form -> Form. And as you can already see, this is exactly the signature of mappend.
The alternative, and the conventional architecture would probably look like a -> b -> c and then c -> d -> e and then...
Now, with forms it's not so challenging; the challenge is to work with this in real-world applications. And to do that, simply ask yourself as much as you can (because it pays off, as you can see): How can I make this concept composable? And since monoids are such a simple way to achieve that (we want simple), ask yourself first: How is this concept a monoid?
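As a minimal sketch of that idea - Form, its field list, and the instances below are all hypothetical, invented purely for illustration:
newtype Form = Form { formFields :: [String] }
  deriving Show

instance Semigroup Form where
  Form xs <> Form ys = Form (xs ++ ys)

instance Monoid Form where
  mempty = Form []

personalInfoForm, pictureForm :: Form
personalInfoForm = Form ["name", "email"]
pictureForm      = Form ["picture upload"]

-- Combining two forms yields just another Form, not a new, more specialized type.
combinedForm :: Form
combinedForm = personalInfoForm <> pictureForm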
Sidenote: Thankfully, Haskell very much discourages you from extending types, as it is a functional language (no inheritance). But it's still possible to make one type for something, another type for something else, and a third type that has both as fields. If this is done for composition - see if you can avoid it.
Fine, I like math. But from a practical point of view, what is the point of Monoid? How does it help us express ideas better?
It's an API. A simple one. For types that support:
having a zero element
having an append operation
Lots of types support these operations. So having a name for the operations and an API helps us capture the fact more clearly.
APIs are good because they let us reuse code, and reuse concepts. Meaning better, more maintainable code.
A very simple example is foldMap. Just by plugging different monoids into this single function, you can compute:
the first and the last element,
the sum or the product of elements (from this also their average etc.),
whether all or any of the elements have a given property,
compute the maximal or minimal element,
map the elements to a collection (like lists, sets, strings, Text, ByteString or ByteString Builder) and concatenate them together - they're all monoids.
Moreover, monoids are composable: if a and b are monoids, so is (a, b). So you can easily compute several different monoidal values in one pass (like the sum and the product when computing the average of elements etc).
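A small sketch of both points, using only the standard wrappers from Data.Monoid (the input lists are made up):
import Data.Monoid (Sum(..), Product(..), Any(..))

-- the pair monoid lets us compute the sum and the product in one pass
totalAndProduct :: (Sum Int, Product Int)
totalAndProduct = foldMap (\x -> (Sum x, Product x)) [1,2,3,4]
-- (Sum {getSum = 10}, Product {getProduct = 24})

anyEven :: Bool
anyEven = getAny (foldMap (Any . even) [1,3,5,6])
-- True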
And although you can do all this without monoids, using foldr or foldl, it's much more cumbersome and also often less efficient: for example, if you have a balanced binary tree and you want to find its minimum and maximum element, foldr (or foldl) will always be O(n) for one of the two, while foldMap with appropriate monoids is O(log n) in both cases.
And this was all just a single function foldMap! There are many other interesting applications. To give one: exponentiation by squaring is an efficient way of computing powers, but it's not actually tied to computing powers. You can implement it for any monoid, and if its <> is O(1), you have an efficient way of computing the n-fold combination x <> ... <> x. And suddenly you can do efficient matrix exponentiation and compute the n-th Fibonacci number with just O(log n) multiplications. See times1p in the semigroups package.
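For example, here is a hedged sketch of exponentiation by squaring for an arbitrary Monoid; Data.Semigroup's stimesMonoid does essentially this, so in real code you would reach for that instead:
mtimes :: Monoid m => Int -> m -> m
mtimes 0 _ = mempty
mtimes n x
  | even n    = let half = mtimes (n `div` 2) x in half <> half
  | otherwise = x <> mtimes (n - 1) x

-- e.g. mtimes 10 (Product 2) == Product 1024, using only O(log n) combines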
See also Monoids and Finger Trees.
The point is that when you tag an Int as Product, you express your intent for the integers to be multiplied, and by tagging them as Sum, to be added together.
Then you can use the same mconcat on both. This is used e.g. in Foldable, where a single foldMap expresses the idea of folding over a containing structure while combining the elements in the way a specific monoid prescribes.

Are monads expressions, or are there statements in Haskell?

I have an ontological question about monads in Haskell; I'm shaky on whether the language makes a distinction between statements and expressions at all. For example, I feel like in most other languages anything with a signature like a -> SomeMonadProbs () would be considered a statement. That said, since Haskell is purely functional, and functions are composed of expressions, I'm a wee bit confused about what Haskell would say about monads in terms of their expression-hood.
Monad is just one interface for interacting with expressions. For example, consider this list comprehension implemented using do notation:
example :: [(Int, Int)]
example = do
    x <- [1..3]
    y <- [4..6]
    return (x, y)
That desugars to:
[1..3] >>= \x ->
    [4..6] >>= \y ->
        return (x, y)
... and substituting in the definition of (>>=) for lists gives:
concatMap (\x -> concatMap (\y -> [(x, y)]) [4..6]) [1..3]
The important idea is that anything you can do using do notation can be replaced with calls to (>>=).
The closest things to "statements" in Haskell are the syntactic lines of a do notation block, such as:
x <- [1..3]
These lines do not correspond to isolated expressions, but rather syntactic fragments of an expression which are not self-contained:
[1..3] >>= \x -> ... {incomplete lambda}
So it's really more appropriate to say that everything is an expression in Haskell, and do notation gives you something which appears like a bunch of statements but actually desugars to a bunch of expressions under the hood.
Here are a few thoughts.
a >>= b is an application just like any other application, so from a syntactic point of view there are clearly no statements in Haskell, only expressions.
From a semantic point of view (see for example the Tackling the awkward squad paper) there are "denotational" and "operational" fragments of Haskell semantics.
The denotational fragment treats >>= much like a data constructor, so it considers a >>= b to be in WHNF. The "operational" fragment "deconstructs" the values in the IO monad and performs the corresponding effects in the process.
When reasoning about programs, you often don't need to consider the "operational" fragment at all. For example, when you refactor foo a >> foo a into let bar = foo a in bar >> bar you don't care about the nature of foo, so IO actions are indistinguishable from any other values here.
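Spelled out with a concrete (made-up) action, that refactoring looks like this; both definitions denote the same program:
twiceA, twiceB :: IO ()
twiceA = putStrLn "hello" >> putStrLn "hello"
twiceB = let action = putStrLn "hello" in action >> action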
It's where Haskell shines, and it's tempting to say there are no statements at all; however, this leads to a funny and somewhat paradoxical conclusion. For example, the C preprocessor language can be considered a denotational fragment of C. So C has denotational and operational fragments too, but nobody says that C is purely functional or has no statements. See the post The C language is purely functional for a detailed treatment of this matter.
Haskell of course differs from C quantitatively: its denotational fragment is expressive enough to be practically useful, so you have to think about underlying transitions in its operational semantics less often than in C.
But when you have to think about those transitions, like when reasoning about the order of data written to a network socket, you have to resort to that statement-after-statement thinking.
So while IO actions are not themselves statements, and in a certain narrow technical sense there are no statements at all, the actions represent statements, so I think it's fair to say that statements are present in Haskell in a very indirect form.
whether the language makes a distinction between statements and expressions at all
It does not. There are no productions for "statement" or anything like that in the grammar, and nothing is called "statement" or anything equivalent (as far as I know) in the language description.
The language report calls elements inside the do notation "statements". There are two kinds of statements that are not expressions: pat <- exp and let decls.
in most other languages anything with a signature like a -> SomeMonadProbs () would be considered a statement
Haskell is different from most other languages. That's kinda its point (not being different for the sake of it, obviously, but unifying expressions and statements into a single construct).

Why are if expressions frowned upon in Haskell?

This has been a question I've been wondering about for a while. if statements are staples in most programming languages (at least the ones I've worked with), but in Haskell they seem to be quite frowned upon. I understand that for complex situations Haskell's pattern matching is much cleaner than a bunch of ifs, but is there any real difference?
For a simple example, take a homemade version of sum (yes, I know it could just be foldr (+) 0):
sum :: [Int] -> Int
-- separate all the cases out
sum [] = 0
sum (x:xs) = x + sum xs
-- guards
sum xs
  | null xs   = 0
  | otherwise = (head xs) + sum (tail xs)
-- case
sum xs = case xs of
  [] -> 0
  _  -> (head xs) + sum (tail xs)
-- if statement
sum xs = if null xs then 0 else (head xs) + sum (tail xs)
As a second question, which one of these options is considered "best practice" and why? My professor way back when always used the first method whenever possible, and I'm wondering if that's just his personal preference or if there was something behind it.
The problem with your examples is not the if expressions, it's the use of partial functions like head and tail. If you try to call either of these with an empty list, it throws an exception.
> head []
*** Exception: Prelude.head: empty list
> tail []
*** Exception: Prelude.tail: empty list
If you make a mistake when writing code using these functions, the error will not be detected until run time. If you make a mistake with pattern matching, your program will not compile.
For example, let's say you accidentally switched the then and else parts of your function.
-- Compiles, throws error at run time.
sum xs = if null xs then (head xs) + sum (tail xs) else 0
-- Doesn't compile. Also stands out more visually.
sum [] = x + sum xs
sum (x:xs) = 0
Note that your example with guards has the same problem.
I think the Boolean Blindness article answers this question very well. The problem is that boolean values have lost all their semantic meaning as soon as you construct them. That makes them a great source for bugs and also makes the code more difficult to understand.
Your first version, the one preferred by your prof, has the following advantages compared to the others:
no mention of null
list components are named in the pattern, so no mention of head and tail.
I do think that this one is considered "best practice".
What's the big deal? Why would we want to avoid especially head and tail? Well, everybody knows that those functions are not total, so one automatically tries to make sure that all cases are covered. A pattern match on [] not only stands out more than null xs; a series of pattern matches can also be checked by the compiler for completeness. Hence, the idiomatic version with a complete pattern match is easier to grasp (for the trained Haskell reader) and can be proved exhaustive by the compiler.
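As a small illustration (sumFirst is a made-up name, and the exact warning text varies by GHC version), compiling with -Wall, which includes -Wincomplete-patterns, flags the missing case:
-- GHC reports something like:
--   Pattern match(es) are non-exhaustive
--   In an equation for 'sumFirst': Patterns not matched: []
sumFirst :: [Int] -> Int
sumFirst (x:xs) = x + sumFirst xs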
The second version is slightly better than the last one because one sees at once that all cases are handled. Still, in the general case the RHS of the second equation could be longer, and there could be a where clause with a couple of definitions, the last of which could be something like:
  where
    ... many definitions here ...
    head xs = ... alternative redefinition of head ...
To be absolutely sure you understand what the RHS does, you have to make sure common names have not been redefined.
The 3rd version is the worst one IMHO: a) The 2nd match fails to deconstruct the list and still uses head and tail. b) The case is slightly more verbose than the equivalent notation with 2 equations.
In many programming languages, if-statements are fundamental primitives, and things like switch-blocks are just syntax sugar to make deeply-nested if-statements easier to write.
Haskell does it the other way around. Pattern matching is the fundamental primitive, and an if-expression is literally just syntax sugar for pattern matching. Similarly, constructs like null and head are simply user-defined functions, which are all ultimately implemented using pattern matching. So pattern matching is the thing at the bottom of it all. (And therefore potentially more efficient than calling user-defined functions.)
In many cases - such as the ones you list above - it's simply a matter of style. The compiler can almost certainly optimise things to the point where all versions are roughly equal in performance. But generally [not always!] pattern matching makes it clearer exactly what you're trying to achieve.
(It's annoyingly easy to write an if-expression where you get the two alternatives the wrong way around. You'd think it would be a rare mistake, but it's surprisingly common. With a pattern match, there's little chance of making that specific mistake, although there's still plenty of other things to screw up.)
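For instance, a minimal sketch of the "if is sugar for a match on Bool" point, reusing the question's own example; both definitions behave identically:
sumIf, sumCase :: [Int] -> Int
sumIf   xs = if null xs then 0 else head xs + sumIf (tail xs)
sumCase xs = case null xs of
               True  -> 0
               False -> head xs + sumCase (tail xs)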
Each call to null, head and tail entails a pattern match. But the 1st version in your question does just one pattern match, and reuses its results through named components of the pattern.
Just for that, it is better. But it is also more visually apparent, more readable.
Pattern matching is better than a string of if-then-else statements for (at least) the following reasons:
it is more declarative
it interacts well with sum-types
Pattern matching helps to reduce the amount of "accidental complexity" in your code - that is, code that is really more about implementation details rather than the essential logic of your program.
In most other languages, when the compiler/run-time sees a string of if-then-else statements it has no choice but to test the conditions in exactly the order the programmer specified them. But pattern matching encourages the programmer to focus more on describing what should happen versus how things should be performed. Due to purity and immutability of values in Haskell, the compiler can consider the collection of patterns as a whole and decide how best to implement them.
An analogy would be C's switch statement. If you dump the assembly code for various switch statements you will see that sometimes the compiler will generate a chain/tree of comparisons and in other cases it will generate a jump table. The programmer uses the same syntax in both cases - the compiler chooses the implementation based on what the comparison values are. If they form a contiguous block of values the jump table method is used, otherwise a comparison tree is used. And this separation of concerns allows the compiler to implement even more strategies in the future if other patterns among the comparison values are detected.

Why does Haskell's `head` crash on an empty list (or why *doesn't* it return an empty list)? (Language philosophy)

Note to other potential contributors: Please don't hesitate to use abstract or mathematical notations to make your point. If I find your answer unclear, I will ask for elucidation, but otherwise feel free to express yourself in a comfortable fashion.
To be clear: I am not looking for a "safe" head, nor is the choice of head in particular exceptionally meaningful. The meat of the question follows the discussion of head and head', which serve to provide context.
I've been hacking away with Haskell for a few months now (to the point that it has become my main language), but I am admittedly not well-informed about some of the more advanced concepts nor the details of the language's philosophy (though I am more than willing to learn). My question then is not so much a technical one (unless it is and I just don't realize it) as it is one of philosophy.
For this example, I am speaking of head.
As I imagine you'll know,
Prelude> head []
*** Exception: Prelude.head: empty list
This follows from head :: [a] -> a. Fair enough. Obviously one cannot return an element of (hand-wavingly) no type. But at the same time, it is simple (if not trivial) to define
head' :: [a] -> Maybe a
head' [] = Nothing
head' (x:xs) = Just x
I've seen a little discussion of this in the comment sections of certain posts. Notably, one Alex Stangl says
'There are good reasons not to make everything "safe" and to throw exceptions when preconditions are violated.'
I do not necessarily question this assertion, but I am curious as to what these "good reasons" are.
Additionally, a Paul Johnson says,
'For instance you could define "safeHead :: [a] -> Maybe a", but now instead of either handling an empty list or proving it can't happen, you have to handle "Nothing" or prove it can't happen.'
The tone that I read from that comment suggests that this is a notable increase in difficulty/complexity/something, but I am not sure that I grasp what he's putting out there.
One Steven Pruzina says (in 2011, no less),
"There's a deeper reason why e.g 'head' can't be crash-proof. To be polymorphic yet handle an empty list, 'head' must always return a variable of the type which is absent from any particular empty list. It would be Delphic if Haskell could do that...".
Is polymorphism lost by allowing empty list handling? If so, how so, and why? Are there particular cases which would make this obvious? This section is amply answered by Russell O'Connor. Any further thoughts are, of course, appreciated.
I'll edit this as clarity and suggestion dictates. Any thoughts, papers, etc., you can provide will be most appreciated.
Is polymorphism lost by allowing empty
list handling? If so, how so, and why?
Are there particular cases which would
make this obvious?
The free theorem for head states that
f . head = head . map f
Applying this theorem to [] implies that
f (head []) = head (map f []) = head []
This theorem must hold for every f, so in particular it must hold for const True and const False. This implies
True = const True (head []) = head [] = const False (head []) = False
Thus if head is properly polymorphic and head [] were a total value, then True would equal False.
PS. I have another comment about the background to your question: if you have a precondition that your list is non-empty, then you should enforce it by using a non-empty list type in your function signature instead of a plain list.
Why does anyone use head :: [a] -> a instead of pattern matching? One of the reasons is because you know that the argument cannot be empty and do not want to write the code to handle the case where the argument is empty.
Of course, your head' of type [a] -> Maybe a is defined in the standard library as Data.Maybe.listToMaybe. But if you replace a use of head with listToMaybe, you have to write the code to handle the empty case, which defeats this purpose of using head.
I am not saying that using head is a good style. It hides the fact that it can result in an exception, and in this sense it is not good. But it is sometimes convenient. The point is that head serves some purposes which cannot be served by listToMaybe.
The last quotation in the question (about polymorphism) simply means that it is impossible to define a function of type [a] -> a which returns a value on the empty list (as Russell O'Connor explained in his answer).
It's only natural to expect the following to hold: xs === head xs : tail xs - a list is identical to its first element, followed by the rest. Seems logical, right?
Now, let's count the number of conses (applications of :), disregarding the actual elements, when applying the purported 'law' to []: [] should be identical to foo : bar, but the former has 0 conses, while the latter has (at least) one. Uh oh, something's not right here!
Haskell's type system, for all its strengths, is not up to expressing the fact that you should only call head on a non-empty list (and that the 'law' is only valid for non-empty lists). Using head shifts the burden of proof to the programmer, who should make sure it's not used on empty lists. I believe dependently typed languages like Agda can help here.
Finally, a slightly more operational-philosophical description: how should head ([] :: [a]) :: a be implemented? Conjuring a value of type a out of thin air is impossible (think of uninhabited types such as data Falsum), and would amount to proving anything (via the Curry-Howard isomorphism).
There are a number of different ways to think about this. So I am going to argue both for and against head':
Against head':
There is no need to have head': Since lists are a concrete data type, everything that you can do with head' you can do by pattern matching.
Furthermore, with head' you're just trading off one functor for another. At some point you want to get down to brass tacks and get some work done on the underlying list element.
In defense of head':
But pattern matching obscures what's going on. In Haskell we are interested in calculating functions, which is better accomplished by writing them in point-free style using compositions and combinators.
Furthermore, thinking about the [] and Maybe functors, head' allows you to move back and forth between them (in particular via listToMaybe and maybeToList).
If in your use case an empty list makes no sense at all, you can always opt to use NonEmpty instead, where neHead is safe to use. If you see it from that angle, it's not the head function that is unsafe, it's the whole list data-structure (again, for that use case).
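A small sketch of that option; in current base the module is Data.List.NonEmpty, where the safe head for non-empty lists is simply called head (firstOf below is just a made-up wrapper):
import Data.List.NonEmpty (NonEmpty(..))
import qualified Data.List.NonEmpty as NE

firstOf :: NonEmpty a -> a
firstOf = NE.head   -- total: the type rules out the empty case

example :: Int
example = firstOf (1 :| [2, 3])   -- 1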
I think this is a matter of simplicity and beauty. Which is, of course, in the eye of the beholder.
If coming from a Lisp background, you may be aware that lists are built of cons cells, each cell having a data element and a pointer to the next cell. The empty list is not a list per se, but a special symbol. And Haskell goes with this reasoning.
In my view, it is cleaner, simpler to reason about, and more traditional if the empty list and a non-empty list are two different things.
...I may add - if you are worried about head being unsafe - don't use it, use pattern matching instead:
sum [] = 0
sum (x:xs) = x + sum xs
