Automatically inserting laziness in Haskell - haskell

Haskell pattern matching is often head strict. For example, f (x:xs) = ...
requires the input list to be evaluated to (thunk : thunk). But sometimes such evaluation is not needed, and the function could afford to be non-strict in some arguments, for example f (x:xs) = 3.
Ideally, in such situations we could avoid evaluating the argument and get the behaviour of const 3, which could be done with an irrefutable pattern: f ~(x:xs) = 3. This would give us performance benefits and greater error tolerance.
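For illustration, a minimal example of the difference (the function names are made up for this demonstration):
headStrict :: [a] -> Int
headStrict (_:_) = 3    -- the pattern match forces the list to WHNF

headLazy :: [a] -> Int
headLazy ~(_:_) = 3     -- irrefutable pattern: the argument is never inspected

-- headStrict undefined  throws:  Prelude.undefined
-- headLazy   undefined  returns: 3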
My question is: Does GHC already implement such transformations via some kind of strictness analysis? Appreciate it if you could also point me to some readings on it.

As far as I know, GHC will never make something more lazy than specified by the programmer, even if it can prove that doesn't change the semantics of the term. I don't think there is any fundamental reason to avoid changing the laziness of a term when we can prove the semantics don't change; I suspect it's more of an empirical observation that we don't know of any situations where that would be a really great idea. (And if a transformation would change the semantics, I would consider it a bug for GHC to make that change.)
There is only one possible exception that comes to mind, the so-called "full laziness" transformation, described well on the wiki. In short, GHC will translate
\a b -> let c = {- something that doesn't mention b -} in d
to
\a -> let c = {- same thing as before -} in \b -> d
to avoid recomputing c each time the argument is applied to a new b. But it seems to me that this transformation is more about memoization than about laziness: the two terms above appear to me to have the same (denotational) semantics wrt laziness/strictness, and are only operationally different.
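As a concrete sketch of that transformation (the names here are illustrative, not actual GHC output):
expensiveTable :: Int -> [(Int, Int)]
expensiveTable a = [(i, i * a) | i <- [0 .. 10000]]

-- before full laziness: the table is rebuilt on every application to b
g :: Int -> Int -> Maybe Int
g = \a b -> let table = expensiveTable a in lookup b table

-- after full laziness: partially applying to a shares the table
-- across all later applications to b
g' :: Int -> Int -> Maybe Int
g' = \a -> let table = expensiveTable a in \b -> lookup b table
With map (g' 3) [0 .. 99], the table for 3 is built only once, while denotationally the two definitions agree, which matches the point above that the difference is operational rather than semantic.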

Related

What does seq actually do in Haskell?

From Real World Haskell I read
It operates as follows: when a seq expression is evaluated, it forces its first argument to be evaluated, then returns its second argument. It doesn't actually do anything with the first argument: seq exists solely as a way to force that value to be evaluated.
where I've emphasised the then because to me it implies an order in which the two things happen.
From Hackage I read
The value of seq a b is bottom if a is bottom, and otherwise equal to b. In other words, it evaluates the first argument a to weak head normal form (WHNF). seq is usually introduced to improve performance by avoiding unneeded laziness.
A note on evaluation order: the expression seq a b does not guarantee that a will be evaluated before b. The only guarantee given by seq is that both a and b will be evaluated before seq returns a value. In particular, this means that b may be evaluated before a. […]
Furthermore, if I click on the # Source link from there, the page doesn't exist, so I can't see the code of seq.
That seems in line with a comment under this answer:
[…] seq cannot be defined in normal Haskell
On the other hand (or on the same hand, really), another comment reads
The 'real' seq is defined in GHC.Prim as seq :: a -> b -> b; seq = let x = x in x. This is only a dummy definition. Basically seq is special syntax, handled particularly by the compiler.
Can anybody shed some light on this topic? Especially in the following respects.
What source is right?
Is seq's implementation really not writable in Haskell?
If so, what does it even mean? That it is a primitive? What does this tell me about what seq actually does?
In seq a b is a guaranteed to be evaluated before b at least in the case that b makes use of a, e.g. seq a (a + x)?
Others answers have already discussed the meaning of seq and its relationship to pseq. But there appears to be quite some confusion about what exactly the implications of seq’s caveats are.
It is true, technically speaking, that a `seq` b does not guarantee a will be evaluated before b. This may seem troubling: how could it possibly serve its purpose if that were the case? Let’s consider the example Jon gave in their answer:
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' f acc [] = acc
foldl' f acc (x : xs)
  = acc' `seq` foldl' f acc' xs
  where
    acc' = f acc x
Surely, we care about acc' being evaluated before the recursive call here. If it is not, the whole purpose of foldl' is lost! So why not use pseq here? And is seq really all that useful?
Fortunately, the situation is not actually so dire. seq really is the right choice here. GHC would never actually choose to compile foldl' such that it evaluates the recursive call before evaluating acc', so the behavior we want is preserved. The difference between seq and pseq is rather what flexibility the optimizer has to make a different decision when it thinks it has particularly good reason to.
Understanding seq and pseq’s strictness
To understand what that means, we must learn to think a little like the GHC optimizer. In practice, the only concrete difference between seq and pseq is how they affect the strictness analyzer:
seq is considered strict in both of its arguments. That is, in a function definition like
f a b c = (a `seq` b) + c
f will be considered strict in all three of its arguments.
pseq is just like seq, but it’s only considered strict in its first argument, not its second one. That means in a function definition like
g a b c = (a `pseq` b) + c
g will be considered strict in a and c, but not b.
What does this mean? Well, let’s first define what it means for a function to “be strict in one of its arguments” in the first place. The idea is that if a function is strict in one of its arguments, then a call to that function is guaranteed to evaluate that argument. This has several implications:
Suppose we have a function foo :: Int -> Int that is strict in its argument, and suppose we have a call to foo that looks like this:
foo (x + y)
A naïve Haskell compiler would construct a thunk for the expression x + y and pass the resulting thunk to foo. But we know that evaluating foo will necessarily force that thunk, so we’re not gaining anything from this laziness. It would be better to evaluate x + y immediately, then pass the result to foo to save a needless thunk allocation.
Since we know there’s never any reason to pass a thunk to foo, we gain an opportunity to make additional optimizations. For example, the optimizer could choose to internally rewrite foo to take an unboxed Int# instead of an Int, avoiding not only a thunk construction for x + y but avoiding boxing the resulting value altogether. This allows the result of x + y to be passed directly, on the stack, rather than on the heap.
As you can see, strictness analysis is rather crucial to making an efficient Haskell compiler, as it allows the compiler to make far more intelligent decisions about how to compile function calls, among other things. For that reason, we generally want strictness analysis to find as many opportunities to eagerly evaluate things as possible, letting us save on useless heap allocations.
With this in mind, let’s return to our f and g examples above. Let’s think about what strictness we’d intuitively expect these functions to have:
Recall that the body of f is (a `seq` b) + c. Even if we ignore the special properties of seq altogether, we know that it eventually evaluates to its second argument. This means f ought to be at least as strict as if its body were just b + c (with a entirely unused).
We know that evaluating b + c must fundamentally evaluate both b and c, so f must, at the very least, be strict in both b and c. Whether it’s strict in a is the more interesting question. If seq were actually just flip const, it would not be, as a would not be used, but of course the whole point of seq is to introduce artificial strictness, so in fact f is also considered strict in a.
Happily, the strictness of f I mentioned above is entirely consistent with our intuition about what strictness it ought to have. f is strict in all of its arguments, precisely as we would expect.
Intuitively, all of the above reasoning for f should also apply to g. The only difference is the replacement of seq with pseq, and we know that pseq provides a stronger guarantee about evaluation order than seq does, so we’d expect g to be at least as strict as f… which is to say, also strict in all its arguments.
However, remarkably, this is not the strictness GHC infers for g. GHC considers g strict in a and c, but not b, even though by our definition of strictness above, g is rather obviously strict in b: b must be evaluated for g to produce a result! As we’ll see, it is precisely this discrepancy that makes pseq so deeply magical, and why it’s generally a bad idea.
The implications of strictness
We’ve now seen that seq leads to the strictness we’d expect while pseq does not, but it’s not immediately obvious what that implies. To illustrate, consider a possible call site where f is used:
f a (b + 1) c
We know that f is strict in all its arguments, so by the same reasoning we used above, GHC should evaluate b + 1 eagerly and pass its result to f, avoiding a thunk.
At first blush, this might seem all well and good, but wait: what if a is a thunk? Even though f is also strict in a, it’s just a bare variable—maybe it was passed in as an argument from somewhere else—and there’s no reason for GHC to eagerly force a here if f is going to force it itself. The only reason we force b + 1 is to avoid allocating a new thunk; nothing is saved by forcing the already-created a at the call site. This means a might in fact be passed as an unevaluated thunk.
This is something of a problem, because in the body of f, we wrote a `seq` b, requesting a be evaluated before b. But by our reasoning above, GHC just went ahead and evaluated b first! If we really, really need to make sure b isn’t evaluated until after a is, this type of eager evaluation can’t be allowed.
Of course, this is precisely why pseq is considered lazy in its second argument, even though it actually is not. If we replace f with g, then GHC would obediently allocate a fresh thunk for b + 1 and pass it on the heap, ensuring it is not evaluated a moment too soon. This of course means more heap allocation, no unboxing, and (worst of all) no propagation of strictness information further up the call chain, creating potentially cascading pessimizations. But hey, that’s what we asked for: avoid evaluating b too early at all costs!
Hopefully, this illustrates why pseq is seductive, but ultimately counterproductive unless you really know what you’re doing. Sure, you guarantee the evaluation you’re looking for… but at what cost?
The takeaways
Hopefully the above explanation has made clear how both seq and pseq have advantages and disadvantages:
seq plays nice with the strictness analyzer, exposing many more potential optimizations, but those optimizations might disrupt the order of evaluation we expect.
pseq preserves the desired evaluation order at all costs, but it only does this by outright lying to the strictness analyzer so it’ll stay off its case, dramatically weakening its ability to help the optimizer do good things.
How do we know which tradeoffs to choose? While we may now understand why seq can sometimes fail to evaluate its first argument before its second, we don’t have any more reason to believe this is an okay thing to let happen.
To soothe your fears, let’s take a step back and think about what’s really happening here. Note that GHC never actually compiles the a `seq` b expression itself in such a way that a fails to be evaluated before b. Given an expression like a `seq` (b + c), GHC won’t ever secretly stab you in the back and evaluate b + c before evaluating a. Rather, what it does is much more subtle: it might indirectly cause b and c to be individually evaluated before evaluating the overall b + c expression, since the strictness analyzer will note that the overall expression is still strict in both b and c.
How all this fits together is incredibly tricky, and it might make your head spin, so perhaps you don’t find the previous paragraph all that soothing after all. But to make the point more concrete, let’s return to the foldl' example at the beginning of this answer. Recall that it contains an expression like this:
acc' `seq` foldl' f acc' xs
In order to avoid the thunk blowup, we need acc' to be evaluated before the recursive call to foldl'. But given the above reasoning, it still always will be! The difference that seq makes here relative to pseq is, again, only relevant for strictness analysis: it allows GHC to infer that this expression is also strict in f and xs, not just acc', which in this situation doesn’t actually change much at all:
The overall foldl' function still isn’t considered strict in f, since in the first case of the function (the one where xs is []), f is unused, so for some call patterns, foldl' is lazy in f.
foldl' can be considered strict in xs, but that is totally uninteresting here, since xs is only a piece of one of foldl'’s arguments, and that strictness information doesn’t affect the strictness of foldl' at all.
So, if there is not actually any difference here, why not use pseq? Well, suppose foldl' is inlined some finite number of times at a call site, since maybe the shape of its second argument is partially known. The strictness information exposed by seq might then expose several additional optimizations at the call site, leading to a chain of advantageous optimizations. If pseq had been used, those optimizations would be obscured, and GHC would produce worse code.
The real takeaway here is therefore that even though seq might sometimes not evaluate its first argument before its second, this is only technically true, the way it happens is subtle, and it’s pretty unlikely to break your program. This should not be too surprising: seq is the tool the authors of GHC expect programmers to use in this situation, so it would be rather rude of them to make it do the wrong thing! seq is the idiomatic tool for this job, not pseq, so use seq.
When do you use pseq, then? Only when you really, really care about a very specific evaluation order, which usually only happens for one of two reasons: you are using par-based parallelism, or you’re using unsafePerformIO and care about the order of side effects. If you’re not doing either of these things, then don’t use pseq. If all you care about is use cases like foldl', where you just want to avoid needless thunk build-up, use seq. That’s what it’s for.
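For reference, the typical par-based use of pseq looks like this (a sketch using Control.Parallel from the parallel package; the function is made up for illustration):
import Control.Parallel (par, pseq)

-- spark the evaluation of a in parallel, evaluate b on this thread,
-- and only then combine the two results
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs
    b = sum ys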
seq introduces an artificial data dependency between two thunks. Normally, a thunk is forced to evaluate only when pattern-matching demands it. If the thunk a contains the expression case b of { … }, then forcing a also forces b. So there is a dependency between the two: in order to determine the value of a, we must evaluate b.
seq specifies this relationship between any two arbitrary thunks. When seq c d is forced, c is forced in addition to d. Note that I don’t say before: according to the standard, an implementation is free to force c before d or d before c or even some mixture thereof. It’s only required that if c does not halt, then seq c d also doesn’t halt. If you want to guarantee evaluation order, you can use pseq.
The diagrams below illustrate the difference. A black arrowhead (▼) indicates a real data dependency, the kind that you could express using case; a white arrowhead (▽) indicates an artificial dependency.
Forcing seq a b must force both a and b.
  │
┌─▼───────┐
│ seq a b │
└─┬─────┬─┘
  │     │
┌─▽─┐ ┌─▼─┐
│ a │ │ b │
└───┘ └───┘
Forcing pseq a b must force b, which must first force a.
  │
┌─▼────────┐
│ pseq a b │
└─┬────────┘
  │
┌─▼─┐
│ b │
└─┬─┘
  │
┌─▽─┐
│ a │
└───┘
As it stands, it must be implemented as an intrinsic because its type, forall a b. a -> b -> b, claims that it works for any types a and b, without any constraint. It used to belong to a typeclass, but this was removed and made into a primitive because the typeclass version was considered to have poor ergonomics: adding seq to try to fix a performance issue in a deeply nested chain of function calls would require adding a boilerplate Seq a constraint on every function in the chain. (I would prefer the explicitness, but it would be hard to change now.)
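A rough reconstruction of what that class-based version might have looked like (the class and method names here are illustrative, not the historical API):
class Seq a where
  seq' :: a -> b -> b

-- every function in the chain now has to carry the constraint
sumStrict :: (Num a, Seq a) => [a] -> a
sumStrict = go 0
  where
    go acc []       = acc
    go acc (x : xs) = acc `seq'` go (acc + x) xs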
So seq, and syntactic sugar for it like strict fields in data types or BangPatterns in patterns, is about ensuring that something is evaluated by attaching it to the evaluation of something else that will be evaluated. The classic example is foldl'. Here, the seq ensures that when the recursive call is forced, the accumulator is also forced:
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' f acc [] = acc
foldl' f acc (x : xs)
  = acc' `seq` foldl' f acc' xs
  where
    acc' = f acc x
This asks the compiler to ensure that if f is strict, such as (+) on a strict data type like Int, then the accumulator is reduced to an Int at each step, rather than building up a chain of thunks to be evaluated only at the end.
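For comparison, a small illustrative contrast at a call site, using the foldl' from Data.List; without optimisation, the lazy foldl builds roughly ten million nested thunks before anything is added:
import Data.List (foldl')

main :: IO ()
main = do
  print (foldl  (+) 0 [1 .. 10000000 :: Int])  -- chain of thunks, may exhaust memory
  print (foldl' (+) 0 [1 .. 10000000 :: Int])  -- accumulator evaluated at each step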
Real World Haskell is mistaken, and all the other things you quoted are correct. If you care deeply about the evaluation order, use pseq instead.

How to make a partial function?

I was thinking about how I could save myself from undefinition, and one idea I had was to enumerate all possible sources of partiality. At least I would know what to beware of. I have found three so far:
Incomplete pattern matches or guards.
Recursion. (Optionally excluding structural recursion on algebraic types.)
If a function is unsafe, any use of that function infects the user code. (Should I be saying "partiality is transitive"?)
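To make those three sources concrete, here are minimal examples of each (the names are invented for illustration):
-- 1. Incomplete pattern match
headOnly :: [a] -> a
headOnly (x : _) = x            -- headOnly [] is undefined

-- 2. Recursion
loop :: a
loop = loop                     -- never reaches a value

-- 3. Using a partial function infects the caller
lastElem :: [a] -> a
lastElem = headOnly . reverse   -- lastElem [] is undefined too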
I have heard of other ways to obtain a logical contradiction, for instance by using negative types, but I am not sure if anything of that sort applies to Haskell. There are many logical paradoxes out there, and some of them can be encoded in Haskell, but might it be true that any logical paradox requires the use of recursion, and is therefore covered by point 2 above?
For instance, if it were proven that a Haskell expression free of recursion can always be evaluated to normal form, then the three points I give would be a complete list. I fuzzily remember seeing something like a proof of this in one of Simon Peyton Jones's books, but that was written some 30 years ago, so even if I remember correctly and it used to apply to a prototype Haskell back then, it may be false today, seeing how many language extensions we have. Possibly some of them enable other ways to undefine a program?
And then, if it were so easy to detect expressions that cannot be partial, why do we not do that? How much easier life would be!
This is a partial answer (pun intended), where I'll only list a few arguably non-obvious ways one can achieve non-termination.
First, I'll confirm that negative-recursive types can indeed cause non-termination. Indeed, it is known that allowing a recursive type such as
data R a = R (R a -> a)
allows one to define fix, and obtain non-termination from there.
{-# LANGUAGE ScopedTypeVariables #-}
{-# OPTIONS -Wall #-}
data R a = R (R a -> a)
selfApply :: R a -> a
selfApply t@(R x) = x t
-- Church's fixed point combinator Y
-- fix f = (\x. f (x x))(\x. f (x x))
fix :: forall a. (a -> a) -> a
fix f = selfApply (R (\x -> f (selfApply x)))
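With fix in hand, divergence follows immediately (for illustration):
bottom :: a
bottom = fix id   -- forcing this value loops forever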
Total languages like Coq or Agda prohibit this by requiring recursive types to use only strictly-positive recursion.
Another potential source of non-termination is that Haskell allows Type :: Type. As far as I can see, that makes it possible to encode System U in Haskell, where Girard's paradox can be used to cause a logical inconsistency, constructing a term of type Void. That term (as far as I understand) would be non-terminating.
Girard's paradox is unfortunately rather complex to describe in full, and I have not completely studied it yet. I only know it is related to the so-called hypergame, a game whose first move is to choose a finite game to play. A finite game is one which causes every match to terminate after finitely many moves. The moves after that play out a match of the finite game chosen in the first move. Here's the paradox: since the chosen game must be finite, no matter what it is, the whole hypergame match will always terminate after a finite number of moves. This makes hypergame itself a finite game, making the infinite sequence of moves "I choose hypergame, I choose hypergame, ..." a valid play, in turn proving that hypergame is not finite.
Apparently, this argument can be encoded in a rich enough pure type system like System U, and Type :: Type allows the same argument to be embedded in Haskell.

Monotonic sequence type in haskell

I would like to define a type for an infinite number sequence in Haskell. My idea is:
type MySeq = Natural -> Ratio Integer
However, I would also like to be able to define some properties of the sequence on the type level. A simple example would be a non-decreasing sequence like this. Is it possible to do this with the current dependent-type capabilities of GHC?
EDIT: I came up with the following idea:
type PositiveSeq = Natural -> Ratio Natural
data IncreasingSeq = IncreasingSeq {
  start :: Ratio Natural,
  diff :: PositiveSeq}
type IKnowItsIncreasing = [Ratio Natural]
getSeq :: IncreasingSeq -> IKnowItsIncreasing
getSeq s = scanl (+) (start s) [diff s i | i <- [1..]]
Of course, it's basically a hack and not actually type safe at all.
This isn't doing anything very fancy with types, but you could change how you interpret a sequence of naturals to get essentially the same guarantee.
I think you are thinking along the right lines in your edit to the question. Consider
data IncreasingSeq = IncreasingSeq (Integer -> Ratio Natural)
where each ratio represents how much it has increased from the previous number (starting with 0).
Then you can provide a single function
applyToIncreasing :: ([Ratio Natural] -> r) -> IncreasingSeq -> r
applyToIncreasing f (IncreasingSeq s) = f . drop 1 $ scanl (+) 0 (map (s $) [0..])
This should let you deconstruct it in any way, without allowing the function to inspect the real structure.
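For instance (a made-up usage), extracting a finite prefix goes through that single function:
-- Ratio comes from Data.Ratio and Natural from Numeric.Natural, as above
firstTen :: IncreasingSeq -> [Ratio Natural]
firstTen = applyToIncreasing (take 10)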
You just need a way to construct it: probably a fromList that just sorts it and an insert that performs a standard ordered insertion.
It pains part of me to say this, but I don't think you'd gain anything over this using fancy type tricks: there are only three functions that could ever possibly go wrong, and they are fairly simple to correctly implement. The implementation is hidden so anything that uses those is correct as a result of those functions being correct. Just don't export the data constructor for IncreasingSeq.
I would also suggest considering making [Ratio Natural] be the underlying representation. It simplifies things and guarantees that there are no "gaps" in the sequence (so it is guaranteed to be a sequence).
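Here is a minimal sketch of that list-backed, smart-constructor version (module layout and names are my own assumptions, not taken from the answer above):
module IncreasingSeq
  ( IncreasingSeq            -- abstract: the data constructor is not exported
  , fromList
  , insert
  , toList
  ) where

import qualified Data.List as L
import Data.Ratio (Ratio)
import Numeric.Natural (Natural)

newtype IncreasingSeq = IncreasingSeq [Ratio Natural]

-- sorting establishes the invariant on construction
fromList :: [Ratio Natural] -> IncreasingSeq
fromList = IncreasingSeq . L.sort

-- ordered insertion preserves the invariant
insert :: Ratio Natural -> IncreasingSeq -> IncreasingSeq
insert x (IncreasingSeq xs) = IncreasingSeq (L.insert x xs)

-- the only way to observe the contents
toList :: IncreasingSeq -> [Ratio Natural]
toList (IncreasingSeq xs) = xs
This finite-list version matches the fromList/insert suggestion; the infinite, diff-based representation would instead keep applyToIncreasing as its only observer.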
If you want more safety and can take the performance hit, you can use data Nat = Z | S Nat instead of Natural.
I will say that if this was Coq, or a similar language, instead of Haskell I would be more likely to suggest doing some fancier type-level stuff (depending on what you are trying to accomplish) for a couple reasons:
In systems like Coq, you are usually proving theorems about the code. Because of this, it can be useful to have a type-level proof that a certain property holds. Since Haskell doesn't really have a builtin way to prove those sorts of theorems, the utility diminishes.
On the other hand, we can (sometimes) construct data types that essentially must have the properties we want, using a small number of trusted functions and a hidden implementation. In the context of a system with more theorem-proving capability, like Coq, it might be harder to convince the theorem prover of the property this way than if we used a dependent type (possibly, at least). In Haskell, however, we don't have that issue in the first place.

Is there a way to "preserve" results in Haskell?

I'm new to Haskell and understand that it is (basically) a pure functional language, which has the advantage that the results of functions will not change across multiple evaluations. Given this, I'm puzzled by why I can't easily mark a function in such a way that it remembers the results of its first evaluation, and does not have to be evaluated again each time its value is required.
In Mathematica, for example, there is a simple idiom for accomplishing this:
f[x_]:=f[x]= ...
but in Haskell, the closest things I've found is something like
f' = (map f [0 ..] !!)
  where f 0 = ...
        f n = f' ...
which in addition to being far less clear (and apparently limited to Int arguments?) does not (seem to) preserve results within an interactive session.
Admittedly (and clearly), I don't understand exactly what's going on here; but naively, it seems like Haskell should have some way, at the function definition level, of
taking advantage of the fact that its functions are functions and skipping re-computation of their results once they have been computed, and
indicating a desire to do this at the function definition level with a simple and clean idiom.
Is there a way to accomplish this in Haskell that I'm missing? I understand (sort of) that Haskell can't store the evaluations as "state", but why can't it simply (in effect) redefine evaluated functions to be their computed value?
This grows out of this question, in which lack of this feature results in terrible performance.
Use a suitable library, such as MemoTrie.
import Data.MemoTrie
f' = memo f
  where f 0 = ...
        f n = f' ...
That's hardly less nice than the Mathematica version, is it?
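For a fully concrete (made-up) instance of the same pattern, here is memoised Fibonacci:
import Data.MemoTrie (memo)

fib' :: Integer -> Integer
fib' = memo fib
  where
    fib 0 = 0
    fib 1 = 1
    fib n = fib' (n - 1) + fib' (n - 2)   -- recursive calls go through the memoised version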
Regarding
“why can't it simply (in effect) redefine evaluated functions to be their computed value?”
Well, it's not so easy in general. These values have to be stored somewhere. Even for an Int-valued function, you can't just allocate an array with all possible values – it wouldn't fit in memory. The list solution only works because Haskell is lazy and therefore allows infinite lists, but that's not particularly satisfying since lookup is O(n).
For other types it's simply hopeless – you'd need to somehow diagonalise an over-countably infinite domain.
You need some cleverer organisation. I don't know how Mathematica does this, but it probably uses a lot of “proprietary magic”. I wouldn't be so sure that it does really work the way you'd like, for any inputs.
Haskell fortunately has type classes, and these allow you to express exactly what a type needs in order to be quickly memoisable. HasTrie is such a class.
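The core of that class looks roughly like this (simplified; consult Data.MemoTrie itself for the authoritative definitions):
{-# LANGUAGE TypeFamilies, TypeOperators #-}

class HasTrie a where
  data (:->:) a :: * -> *             -- a lazy trie keyed by values of a
  trie   :: (a -> b) -> (a :->: b)    -- tabulate a function into a trie
  untrie :: (a :->: b) -> (a -> b)    -- read the function back out

-- memoisation is then essentially a round trip through the trie:
memo :: HasTrie a => (a -> b) -> (a -> b)
memo = untrie . trie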

Can compilers deduce/prove mathematically?

I'm starting to learn functional programming languages like Haskell and ML, and most of the exercises will show off things like:
foldr (+) 0 [ 1 ..10]
which is equivalent to
sum = 0
for( i in [1..10] )
    sum += i
So that leads me to wonder: why can't the compiler know that this is an arithmetic progression and use the O(1) formula to calculate it?
Especially for pure FP languages without side effects?
The same applies for
sum reverse list == sum list
Given a + b = b + a
and the definition of reverse, can compilers/languages prove it automatically?
Compilers generally don't try to prove this kind of thing automatically, because it's hard to implement.
As well as adding the logic to the compiler to transform one fragment of code into another, you have to be very careful that it only tries to do it when it's actually safe - i.e. there are often lots of "side conditions" to worry about. For example, in your example above, someone might have written an instance of the type class Num (and hence the (+) operator) where a + b is not b + a.
However, GHC does have rewrite rules which you can add to your own source code and could be used to cover some relatively simple cases like the ones you list above, particularly if you're not too bothered about the side conditions.
For example, and I haven't tested this, you might use the following rule for one of your examples above:
{-# RULES
"sum/reverse" forall list . sum (reverse list) = sum list
#-}
Note the parentheses around reverse list - what you've written in your question actually means (sum reverse) list and wouldn't typecheck.
EDIT:
As you're looking for official sources and pointers to research, I've listed a few.
Obviously it's hard to prove a negative but the fact that no-one has given an example of a general-purpose compiler that does this kind of thing routinely is probably quite strong evidence in itself.
As others have pointed out, even simple arithmetic optimisations are surprisingly dangerous, particularly on floating point numbers, and compilers generally have flags to turn them off - for example Visual C++, gcc. Even integer arithmetic isn't always clear-cut and people occasionally have big arguments about how to deal with things like overflow.
As Joachim noted, integer variables in loops are one place where slightly more sophisticated optimisations are applied because there are actually significant wins to be had. Muchnick's book is probably the best general source on the topic but it's not that cheap. The wikipedia page on strength reduction is probably as good an introduction as any to one of the standard optimisations of this kind, and has some references to the relevant literature.
FFTW is an example of a library that does all kinds of mathematical optimization internally. Some of its code is generated by a customised compiler the authors wrote specifically for the purpose. It's worthwhile because the authors have domain-specific knowledge of optimizations that, in the specific context of the library, are both worth the effort and safe.
People sometimes use template metaprogramming to write "self-optimising libraries" that again might rely on arithmetic identities, see for example Blitz++. Todd Veldhuizen's PhD dissertation has a good overview.
If you descend into the realms of toy and academic compilers all sorts of things go. For example my own PhD dissertation is about writing inefficient functional programs along with little scripts that explain how to optimise them. Many of the examples (see Chapter 6) rely on applying arithmetic rules to justify the underlying optimisations.
Also, it's worth emphasising that the last few examples are of specialised optimisations being applied only to certain parts of the code (e.g. calls to specific libraries) where it is expected to be worthwhile. As other answers have pointed out, it's simply too expensive for a compiler to go searching for all possible places in an entire program where an optimisation might apply. The GHC rewrite rules that I mentioned above are a great example of a compiler exposing a generic mechanism for individual libraries to use in a way that's most appropriate for them.
The answer
No, compilers don’t do that kind of stuff.
One reason why
And for your examples, it would even be wrong: Since you did not give type annotations, the Haskell compiler will infer the most general type, which would be
foldr (+) 0 [ 1 ..10] :: Num a => a
and similar
(\list -> sum (reverse list)) :: Num a => [a] -> a
and the Num instance for the type that is being used might well not fulfil the mathematical laws required for the transformation you suggest. The compiler should, before everything else, avoid changing the meaning (i.e. the semantics) of your program.
More pragmatically: The cases where the compiler could detect such large-scale transformations rarely occur in practice, so it would not be worth it to implement them.
An exception
Notable exceptions are linear transformations in loops. Most compilers will rewrite
for (int i = 0; i < n; i++) {
    ... 200 + 4 * i ...
}
to
for (int i = 0, j = 200; i < n; i++, j += 4) {
    ... j ...
}
or something similar, as that pattern often occurs in code working on arrays.
The optimizations you have in mind will probably not be done even in the presence of monomorphic types, because there are so many possibilities and so much knowledge is required. Consider this example:
sum list == sum (reverse list)
The compiler would need to know or take into account the following facts:
sum = foldl (+) 0
(+) is commutative
reverse list is a permutation of list
foldl x c l, where x is commutative and c is a constant, yields the same result for all permutations of l.
This all seems trivial. Sure, the compiler can most probably look up the definition of sum and inline it. It could be required that (+) be commutative, but remember that + is just another symbol without attached meaning to the compiler. The third point would require the compiler to prove some non-trivial properties about reverse.
But the point is:
You don't want the compiler to perform those calculations on each and every expression. Remember, to make this really useful, you'd have to build in a lot of knowledge about many, many standard functions and operators.
You still can't replace the expression above with True unless you can rule out the possibility that list or some list element is bottom. Usually, one cannot do this. You can't even do the following "trivial" optimization of f x == f x in all cases
f x `seq` True
For, consider
f x = (undefined :: Bool, x)
then
f x `seq` True ==> True
f x == f x ==> undefined
That being said, regarding your first example slightly modified for monomorphism:
f n = n * foldl (+) 0 [1..10] :: Int
it is imaginable to optimize the program by moving the expression out of its context and replace it with the name of a constant, like so:
const1 = foldl (+) 0 [1..10] :: Int
f n = n * const1
This is because the compiler can see that the expression must be constant.
What you're describing looks like super-compilation. In your case, if the expression had a monomorphic type like Int (as opposed to polymorphic Num a => a), the compiler could infer that the expression foldr (+) 0 [1 ..10] has no external dependencies, therefore it could be evaluated at compile time and replaced by 55. However, AFAIK no mainstream compiler currently does this kind of optimization.
(In functional programming "proving" is usually associated with something different. In languages with dependent types types are powerful enough to express complex proposition and then through the Curry-Howard correspondence programs become proofs of such propositions.)
As others have noted, it's unclear that your simplifications even hold in Haskell. For instance, I can define
newtype NInt = N Int

instance Num NInt where
  N a + _ = N a
  N b * _ = N b
  ... -- etc
and now sum . reverse :: Num a => [a] -> a does not equal sum :: Num a => [a] -> a since I can specialize each to [NInt] -> NInt where sum . reverse == sum clearly does not hold.
This is one general tension that exists around optimizing "complex" operations: you actually need quite a lot of information in order to successfully prove that it's okay to optimize something. This is why the syntax-level compiler optimizations which do exist are usually monomorphic and related to the structure of programs. It's usually such a simplified domain that there's "no way" for the optimization to go wrong. Even that is often unsafe, because the domain is never quite so simplified and well-known to the compiler.
As an example, a very popular "high-level" syntactic optimization is stream fusion. In this case the compiler is given enough information to know that stream fusion can occur and is basically safe, but even in this canonical example we have to skirt around notions of non-termination.
So what does it take to have \x -> sum [0..x] get replaced by \x -> x*(x + 1)/2? The compiler would need a theory of numbers and algebra built-in. This is not possible in Haskell or ML, but becomes possible in dependently typed languages like Coq, Agda, or Idris. There you could specify things like
revCommute :: (_+_ :: a -> a -> a)
           -> Commutative _+_
           -> foldr _+_ z (reverse as) == foldr _+_ z as
and then, theoretically, tell the compiler to rewrite according to revCommute. This would still be difficult and finicky, but at least we'd have enough information around. To be clear, I'm writing something very strange above, a dependent type. The type not only depends on the ability to introduce both a type and a name for the argument inline, but also on the existence of the entire syntax of your language "at the type level".
There are a lot of differences between what I just wrote and what you'd do in Haskell, though. First, in order to form a basis where such promises can be taken seriously, we must throw away general recursion (and thus we already don't have to worry about questions of non-termination like stream fusion does). We also must have enough structure around to create something like the promise Commutative _+_. This likely depends upon there being an entire theory of operators and mathematics built into the language's standard library, or else you would need to create it yourself. Finally, the richness of type system required to even express these kinds of theories adds a lot of complexity to the entire system and tosses out type inference as you know it today.
But, given all that structure, I'd never be able to create an obligation Commutative _+_ for the _+_ defined to work on NInts and so we could be certain that foldr (+) 0 . reverse == foldr (+) 0 actually does hold.
But now we'd need to tell the compiler how to actually perform that optimization. For stream-fusion, the compiler rules only kick in when we write something in exactly the right syntactic form to be "clearly" an optimization redex. The same kinds of restrictions would apply to our sum . reverse rule. In fact, already we're sunk because
foldr (+) 0 . reverse
foldr (+) 0 (reverse as)
don't match. They're "obviously" the same due to some rules we could prove about (.), but that means that now the compiler must invoke two built-in rules in order to perform our optimization.
At the end of the day, you need a very smart optimization search over the sets of known laws in order to achieve the kinds of automatic optimizations you're talking about.
So not only do we add a lot of complexity to the entire system, require a lot of base work to build-in some useful algebraic theories, and lose Turing completeness (which might not be the worst thing), we also only get a finicky promise that our rule would even fire unless we perform an exponentially painful search during compilation.
Blech.
The compromise that exists today tends to be that sometimes we have enough control over what's being written to be mostly certain that a certain obvious optimization can be performed. This is the regime of stream fusion and it requires a lot of hidden types, carefully written proofs, exploitations of parametricity, and hand-waving before it's something the community trusts enough to run on their code.
And it doesn't even always fire. For an example of battling that problem take a look at the source of Vector for all of the RULES pragmas that specify all of the common circumstances where Vector's stream-fusion optimizations should kick in.
All of this is not at all a critique of compiler optimizations or dependent type theories. Both are really incredible. Instead it's just an amplification of the tradeoffs involved in introducing such an optimization. It's not to be done lightly.
Fun fact: given two arbitrary formulas, do they both give the same output for the same inputs? The answer to this seemingly trivial question is not computable! In other words, it is mathematically impossible to write a computer program that always gives the correct answer in finite time.
Given this fact, it's perhaps not surprising that nobody has a compiler that can magically transform every possible computation into its most efficient form.
Also, isn't this the programmer's job? If you want the sum of an arithmetic sequence commonly enough that it's a performance bottleneck, why not just write some more efficient code yourself? Similarly, if you really want Fibonacci numbers (why?), use the O(1) algorithm.

Resources