How do non-strict and lazy differ? - haskell

I often read that lazy is not the same as non-strict but I find it hard to understand the difference. They seem to be used interchangeably but I understand that they have different meanings. I would appreciate some help understanding the difference.
My questions are scattered throughout this post, and I summarize them at the end. I have a few example snippets; I did not test them, I only present them as concepts. I have added quotes to save you from looking them up. Maybe this will help someone later on with the same question.
Non-Strict Def:
A function f is said to be strict if, when applied to a nonterminating
expression, it also fails to terminate. In other words, f is strict
iff the value of f bot is bot. For most programming languages, all
functions are strict. But this is not so in Haskell. As a simple
example, consider const1, the constant 1 function, defined by:
const1 x = 1
The value of const1 bot in Haskell is 1. Operationally speaking, since
const1 does not "need" the value of its argument, it never attempts to
evaluate it, and thus never gets caught in a nonterminating
computation. For this reason, non-strict functions are also called
"lazy functions", and are said to evaluate their arguments "lazily",
or "by need".
-A Gentle Introduction To Haskell: Functions
I really like this definition. It seems the best one I could find for understanding strict. Is const1 x = 1 lazy as well?
Non-strictness means that reduction (the mathematical term for
evaluation) proceeds from the outside in,
so if you have (a+(b*c)) then first you reduce the +, then you reduce
the inner (b*c).
-Haskell Wiki: Lazy vs non-strict
The Haskell Wiki really confuses me. I understand what they are saying about order, but I fail to see how (a+(b*c)) would evaluate non-strictly if it were passed _|_.
In non-strict evaluation, arguments to a function are not evaluated
unless they are actually used in the evaluation of the function body.
Under Church encoding, lazy evaluation of operators maps to non-strict
evaluation of functions; for this reason, non-strict evaluation is
often referred to as "lazy". Boolean expressions in many languages use
a form of non-strict evaluation called short-circuit evaluation, where
evaluation returns as soon as it can be determined that an unambiguous
Boolean will result — for example, in a disjunctive expression where
true is encountered, or in a conjunctive expression where false is
encountered, and so forth. Conditional expressions also usually use
lazy evaluation, where evaluation returns as soon as an unambiguous
branch will result.
-Wikipedia: Evaluation Strategy
Lazy Def:
Lazy evaluation, on the other hand, means only evaluating an
expression when its results are needed (note the shift from
"reduction" to "evaluation"). So when the evaluation engine sees an
expression it builds a thunk data structure containing whatever values
are needed to evaluate the expression, plus a pointer to the
expression itself. When the result is actually needed the evaluation
engine calls the expression and then replaces the thunk with the
result for future reference.
...
Obviously there is a strong correspondence between a thunk and a
partly-evaluated expression. Hence in most cases the terms "lazy" and
"non-strict" are synonyms. But not quite.
-Haskell Wiki: Lazy vs non-strict
This seems like a Haskell-specific answer. I take it that lazy means thunks and non-strict means partial evaluation. Is that comparison too simplified? Does lazy always mean thunks and non-strict always mean partial evaluation?
In programming language theory, lazy evaluation or call-by-need is
an evaluation strategy which delays the evaluation of an expression
until its value is actually required (non-strict evaluation) and also
avoids repeated evaluations (sharing).
-Wikipedia: Lazy Evaluation
Imperative Examples
I know most people say forget imperative programming when learning a functional language. However, I would like to know if these qualify as non-strict, lazy, both or neither? At the very least it would provide something familiar.
Short Circuiting
f1() || f2()
C#, Python and other languages with "yield"
public static IEnumerable Power(int number, int exponent)
{
    int counter = 0;
    int result = 1;
    while (counter++ < exponent)
    {
        result = result * number;
        yield return result;
    }
}
-MSDN: yield (c#)
Callbacks
int f1() { return 1; }
int f2() { return 2; }

int lazy(int (*cb1)(), int (*cb2)(), int x) {
    if (x == 0)
        return cb1();
    else
        return cb2();
}

int eager(int e1, int e2, int x) {
    if (x == 0)
        return e1;
    else
        return e2;
}

lazy(f1, f2, x);
eager(f1(), f2(), x);
Questions
I know the answer is right in front of me with all those resources, but I can't grasp it. It all seems like the definition is too easily dismissed as implied or obvious.
I know I have a lot of questions. Feel free to answer whatever questions you feel are relevant. I added those questions for discussion.
Is const1 x = 1 also lazy?
How is evaluating from the "outside in" non-strict? Is it because outside-in lets you skip reducing unnecessary expressions, like the argument in const1 x = 1? Skipping reductions seems to fit the definition of lazy.
Does lazy always mean thunks and non-strict always mean partial evaluation? Is this just a generalization?
Are the following imperative concepts Lazy, Non-Strict, Both or Neither?
Short Circuiting
Using yield
Passing Callbacks to delay or avoid execution
Is lazy a subset of non-strict or vice versa, or are they mutually exclusive. For example is it possible to be non-strict without being lazy, or lazy without being non-strict?
Is Haskell's non-strictness achieved by laziness?
Thank you SO!

Non-strict and lazy, while informally interchangeable, apply to different domains of discussion.
Non-strict refers to semantics: the mathematical meaning of an expression. The world to which non-strict applies has no concept of the running time of a function, memory consumption, or even a computer. It simply talks about what kinds of values in the domain map to which kinds of values in the codomain. In particular, a strict function must map the value ⊥ ("bottom" -- see the semantics link above for more about this) to ⊥; a non-strict function is allowed not to do this.
Lazy refers to operational behavior: the way code is executed on a real computer. Most programmers think of programs operationally, so this is probably what you are thinking. Lazy evaluation refers to implementation using thunks -- pointers to code which are replaced with a value the first time they are executed. Notice the non-semantic words here: "pointer", "first time", "executed".
Lazy evaluation gives rise to non-strict semantics, which is why the concepts seem so close together. But as FUZxxl points out, laziness is not the only way to implement non-strict semantics.
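A minimal sketch of the two levels side by side (using undefined in place of ⊥):

-- Semantics: const1 maps _|_ to 1, so it is non-strict.
-- Operationally: GHC passes the argument as an unevaluated thunk
-- that is simply never forced.
const1 :: a -> Int
const1 x = 1

main :: IO ()
main = print (const1 (undefined :: Int))  -- prints 1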
If you are interested in learning more about this distinction, I highly recommend the link above. Reading it was a turning point in my conception of the meaning of computer programs.

An example of an evaluation model that is neither strict nor lazy: optimistic evaluation, which gives some speedup as it can avoid a lot of "easy" thunks:
Optimistic evaluation means that even if a subexpression may not be needed to evaluate the superexpression, we still evaluate some of it using some heuristics. If the subexpression doesn't terminate quickly enough, we suspend its evaluation until it's really needed. This gives us an advantage over lazy evaluation if the subexpression is needed later, as we don't need to generate a thunk. On the other hand, we don't lose too much if the expression doesn't terminate, as we can abort it quickly enough.
As you can see, this evaluation model is not strict: If something that yields _|_ is evaluated, but not needed, the function will still terminate, as the engine aborts the evaluation. On the other hand, it may be possible that more expressions than needed are evaluated, so it's not completely lazy.

Yes, there is some unclear use of terminology here, but the terms coincide in most cases regardless, so it's not too much of a problem.
One major difference is when terms are evaluated. There are multiple strategies for this, ranging on a spectrum from "as soon as possible" to "only at the last moment". The term eager evaluation is sometimes used for strategies leaning toward the former, while lazy evaluation properly refers to a family of strategies leaning heavily toward the latter. The distinction between "lazy evaluation" and related strategies tends to involve when and where the result of evaluating something is retained, versus being tossed aside. The familiar memoization technique in Haskell of assigning a name to a data structure and indexing into it is based on this. In contrast, a language that simply spliced expressions into each other (as in "call-by-name" evaluation) might not support this.
The other difference is which terms are evaluated, ranging from "absolutely everything" to "as little as possible". Since any value actually used to compute the final result can't be ignored, the difference here is how many superfluous terms are evaluated. As well as reducing the amount of work the program has to do, ignoring unused terms means that any errors they would have generated won't occur. When a distinction is being drawn, strictness refers to the property of evaluating everything under consideration (in the case of a strict function, for instance, this means the terms it's applied to. It doesn't necessarily mean sub-expressions inside the arguments), while non-strict means evaluating only some things (either by delaying evaluation, or by discarding terms entirely).
It should be easy to see how these interact in complicated ways; decisions are not at all orthogonal, as the extremes tend to be incompatible. For instance:
Very non-strict evaluation precludes some amount of eagerness; if you don't know whether a term will be needed, you can't evaluate it yet.
Very strict evaluation makes non-eagerness somewhat irrelevant; if you're evaluating everything, the decision of when to do so is less significant.
Alternate definitions do exist, though. For instance, at least in Haskell, a "strict function" is often defined as one that forces its arguments sufficiently that the function will evaluate to _|_ ("bottom") whenever any argument does; note that by this definition, id is strict (in a trivial sense), because forcing the result of id x will have exactly the same behavior as forcing x alone.
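That bottom test can be probed operationally; a quick sketch (undefined stands in for _|_, and we catch the exception that forcing it throws):

import Control.Exception (SomeException, evaluate, try)

-- By the bottom test: f is strict iff forcing (f undefined) diverges.
main :: IO ()
main = do
  r1 <- try (evaluate (const 1 (undefined :: Int)))
          :: IO (Either SomeException Int)
  print r1  -- Right 1: const is non-strict in its second argument
  r2 <- try (evaluate (id (undefined :: Int)))
          :: IO (Either SomeException Int)
  print r2  -- Left ...: forcing (id x) forces x, so id is (trivially) strict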

This started out as an update but it started to get long.
Laziness / Call-by-need is a memoized version of call-by-name where, if the function argument is evaluated, that value is stored for subsequent uses. In a "pure" (effect-free) setting, this produces the same results as call-by-name; when the function argument is used two or more times, call-by-need is almost always faster.
Imperative Example - Apparently this is possible. There is an interesting article on Lazy Imperative Languages. It says there are two methods: one requires closures, the second uses graph reduction. Since C does not support closures, you would need to explicitly pass an argument to your iterator. You could wrap a map structure: if the value does not exist, calculate it; otherwise return the cached value.
Note: Haskell implements this by "pointers to code which are replaced with a value the first time they are executed" - luqui.
This is non-strict call-by-name but with sharing/memoization of the results.
Call-By-Name - In call-by-name evaluation, the arguments to a function are not evaluated before the function is called — rather, they are substituted directly into the function body (using capture-avoiding substitution) and then left to be evaluated whenever they appear in the function. If an argument is not used in the function body, the argument is never evaluated; if it is used several times, it is re-evaluated each time it appears.
Imperative Example: callbacks
Note: This is non-strict as it avoids evaluation if not used.
Non-Strict = In non-strict evaluation, arguments to a function are not evaluated unless they are actually used in the evaluation of the function body.
Imperative Example: short-circuiting
Note: _|_ appears to be a way to test if a function is non-strict
So a function can be non-strict but not lazy. A function that is lazy is always non-strict.
Call-By-Need is partly defined by Call-By-Name which is partly defined by Non-Strict
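A sketch of that hierarchy in Haskell itself, modelling the strategies explicitly in IO so the sharing is visible (the names byName and byNeed are made up for illustration):

import Data.IORef

-- Call-by-name: every use re-runs the computation.
byName :: IO a -> IO (IO a)
byName action = pure action

-- Call-by-need: the first use runs the computation and caches the result.
byNeed :: IO a -> IO (IO a)
byNeed action = do
  cache <- newIORef Nothing
  pure $ do
    cached <- readIORef cache
    case cached of
      Just v  -> pure v                   -- sharing: reuse the stored value
      Nothing -> do
        v <- action
        writeIORef cache (Just v)
        pure v

With either, an argument that is never used is never run (non-strictness); only byNeed also avoids re-running it (sharing, i.e. call-by-need).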
An Excerpt from "Lazy Imperative Languages"
2.1. NON-STRICT SEMANTICS VS. LAZY EVALUATION
We must first clarify the distinction between "non-strict semantics" and "lazy evaluation". Non-strict semantics are those which specify that an expression is not evaluated until it is needed by a primitive operation. There may be various types of non-strict semantics. For instance, non-strict procedure calls do not evaluate the arguments until their values are required. Data constructors may have non-strict semantics, in which compound data are assembled out of unevaluated pieces. Lazy evaluation, also called delayed evaluation, is the technique normally used to implement non-strict semantics. In section 4, the two methods commonly used to implement lazy evaluation are very briefly summarized.
CALL BY VALUE, CALL BY LAZY, AND CALL BY NAME
"Call by value" is the general name used for procedure calls with strict semantics. In call by value languages, each argument to a procedure call is evaluated before the procedure call is made; the value is then passed to the procedure or enclosing expression. Another name for call by value is "eager" evaluation. Call by value is also known as "applicative order" evaluation, because all arguments are evaluated before the function is applied to them. "Call by lazy" (using William Clinger's terminology in [8]) is the name given to procedure calls which use non-strict semantics. In languages with call by lazy procedure calls, the arguments are not evaluated before being substituted into the procedure body. Call by lazy evaluation is also known as "normal order" evaluation, because of the order (outermost to innermost, left to right) of evaluation of an expression. "Call by name" is a particular implementation of call by lazy, used in Algol-60 [18]. The designers of Algol-60 intended that call-by-name parameters be physically substituted into the procedure body, enclosed by parentheses and with suitable name changes to avoid conflicts, before the body was evaluated.
CALL BY LAZY VS. CALL BY NEED
Call by need is an extension of call by lazy, prompted by the observation that a lazy evaluation could be optimized by remembering the value of a given delayed expression, once forced, so that the value need not be recalculated if it is needed again. Call by need evaluation, therefore, extends call by lazy methods by using memoization to avoid the need for repeated evaluation. Friedman and Wise were among the earliest advocates of call by need evaluation, proposing "suicidal suspensions" which self-destructed when they were first evaluated, replacing themselves with their values.

The way I understand it, "non-strict" means trying to reduce workload by reaching completion with a smaller amount of work.
Whereas "lazy evaluation" and similar techniques try to reduce the overall workload by avoiding full completion (hopefully forever).
From your examples...
f1() || f2()
...short-circuiting in this expression cannot trigger 'future work', and there's neither a speculative/amortized factor to the reasoning, nor any computational-complexity debt accruing.
Whereas in the C# example, "lazy" saves a function call in the overall view, but in exchange comes with those sorts of difficulty (at least from the point of call until possible full completion... in this code that's a negligible-distance pathway, but imagine those functions had some high-contention locks to put up with).
int f1() { return 1; }
int f2() { return 2; }

int lazy(int (*cb1)(), int (*cb2)(), int x) {
    if (x == 0)
        return cb1();
    else
        return cb2();
}

int eager(int e1, int e2, int x) {
    if (x == 0)
        return e1;
    else
        return e2;
}

lazy(f1, f2, x);
eager(f1(), f2(), x);

If we're talking general computer science jargon, then "lazy" and "non-strict" are generally synonyms -- they stand for the same overall idea, which expresses itself in different ways for different situations.
However, in a given particular specialized context, they may have acquired differing technical meanings. I don't think you can say anything accurate and universal about what the difference between "lazy" and "non-strict" is likely to be in the situation where there is a difference to make.

Related

Does Prolog use lazy evaluation like Haskell? [duplicate]

Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below: yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from those of the languages usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play in a language like Haskell, you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer but Prolog doesn't know that in advance, so it waits for us to press ; to continue search, in hopes of finding another solution. Sometimes it is able to deduce it in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell does not need any values, a user does, when requesting an output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell: in Haskell all arithmetic is strict, i.e. immediate, whereas in Prolog you have to explicitly request arithmetic evaluation, with is/2.
So perhaps the question is ill-posed. Prolog's operations model is just too different. There are no "results" nor "functions", for one; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels: at the user level (our predicates) Prolog is not eager,
but it is at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.

Can every functional language be lazy?

In a functional language, functions are first class citizens and thus calling them is not the only thing I can do. I can also store them away.
Now when I have a language which is strict by default, I am still not forced to evaluate a function call. I have the option to store the function and its parameters, e.g. in a tuple, for later evaluation.
So instead of
x = f a b c
I do something like
x = (f,a,b,c)
And later, I can evaluate this thing with something like
eval (f,a,b,c) = f a b c
Well, there is probably more to it, because I want to evaluate each unevaluated function call only once, but it seems to me, that this can also be solved with a data structure which is a bit fancier than just a tuple.
The inverse also seems to be the case: e.g. in Haskell, which is lazy by default, I can enforce evaluation with seq or BangPatterns.
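For example (again untested, just as a concept), enforcing evaluation might look like this:

{-# LANGUAGE BangPatterns #-}

-- seq forces its first argument (to WHNF) when the whole expression is demanded:
forced :: Int
forced = let x = 1 + 2 in x `seq` 10  -- x is evaluated even though unused

-- a bang pattern does the same for a function argument:
strictFst :: Int -> Int -> Int
strictFst !x _ = 42                   -- x is forced on every call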
So is it correct to say that every functional language has the potential of being lazy, but most of them are just not lazy by default and thus require additional programming effort to call a function lazily, whereas Haskell is lazy by default and requires additional programming effort to call a function in a strict way?
Should that be the case, what is more difficult for the programmer: writing lazy function calls in a strict language or writing strict function calls in a lazy language?
As a side note: was Simon Peyton Jones serious when he said "the next version of Haskell will be strict"? I first thought that this was a joke. But now I think strict by default isn't all that bad as long as you can be lazy if required.
Lazy evaluation, at the low level, is implemented by a concept called a thunk, which comprises two things:
A closure that computes the value of the deferred computation
A set-once mutable storage cell for memoizing the result.
The first part, the closure, can be modeled in an even simpler way than your tuple with the function and its arguments. You can just use a function that accepts unit or no arguments (depending on how your language works), and in its body you apply the function to the arguments. To compute the result, you just invoke the function.
Paul Johnson mentions Scheme, which is a perfect language to demonstrate this. As a Scheme macro (pseudocode, untested):
(define-syntax delay
  (syntax-rules ()
    ((delay expr ...)
     ;; `(delay expr)` evaluates to a lambda that closes over
     ;; two variables—one to store the result, one to record
     ;; whether the thunk has been forced.
     (let ((value #f)
           (done? #f))
       (lambda ()
         (unless done?
           (set! value (begin expr ...))
           (set! done? #t))
         value)))))

(define (force thunk)
  ;; Thunks are procedures, so to force them you just invoke them.
  (thunk))
But to get this back to the title of the question: does this mean that every functional language can be lazy? The answer is no. Eager languages can implement thunking and use it to provide opt-in delayed evaluation at user-selected spots, but this isn't the same as having pervasive lazy evaluation like Haskell implementations provide.
The answer is a qualified yes. Your intuition that laziness can be implemented in a strict language where functions are first-class objects is correct. But going into the details reveals a number of subtleties.
Let's take a functional language (by which I mean a language where functions can be constructed and manipulated as first-class objects, like in the lambda calculus), where function application is strict (i.e. the function¹ and its argument(s) are fully evaluated before the function is applied). I'll use the syntax of Standard ML, since this is a popular and historically important strict functional language. A strict application F A (where F and A are two expressions) can be delayed by encoding it as
Thunk (F, A)
This object, which contains a function and an argument, is called a thunk. We can define a type of thunks:
datatype ('a, 'b) thunk = Thunk of ('a -> 'b) * 'a;
and a function to evaluate a thunk:
fun evaluate (Thunk (f, x)) = f x;
Nice and easy so far. But we have not, in fact, implemented lazy evaluation! What we've implemented is normal-order evaluation, also known as call-by-name. The difference is that if the value of the thunk is used more than once, it is calculated every time. Lazy evaluation (also known as call-by-need) requires evaluating the expression at most once.
In a pure, strict language, lazy evaluation is in fact impossible to implement. The reason is that evaluating a thunk modifies the state of the system: it goes from unevaluated to evaluated. Implementing lazy evaluation requires a way to change the state of the system.
There's a subtlety here: if the semantics of the language is defined purely in terms of the termination status of expressions and the value of terminating expressions, then, in a pure language, call-by-need and call-by-name are indistinguishable. Call-by-value (i.e. strict evaluation) is distinguishable because fewer expressions terminate — call-by-need and call-by-name hide any non-termination that happens in a thunk that is never evaluated. The equivalence of call-by-need and call-by-name allows lazy evaluation to be considered as an optimization of normal-order evaluation (which has nice theoretical properties). But in many programs, using call-by-name instead of call-by-value would blow up the running time by computing the value of the same expressions over and over again.
In a language with mutable state, lazy evaluation can be expressed by storing the value into the thunk when it is calculated.
datatype ('a, 'b) lazy_state = Lazy of ('a -> 'b) * 'a | Value of 'b;
type ('a, 'b) lazy = ('a, 'b) lazy_state ref;
fun lazy (f, x) = ref (Lazy (f, x));
fun force r =
    case !r of
        Value y => y
      | Lazy (f, x) => let val y = f x in (r := Value y; y) end;
This code is not very complicated, so even in ML dialects that provide lazy evaluation as a library feature (possibly with syntactic sugar), it isn't used all that often in practice — often, the point at which the value will be needed is a known location in the program, and programmers just use a function and pass it its argument at that location.
While this is getting into subjective territory, I would say that it's much easier to implement lazy evaluation like this, than to implement strict evaluation in a language like Haskell. Forcing strict evaluation in Haskell is basically impossible (except by wrapping everything into a state monad and essentially writing ML code in Haskell syntax). Of course, strict evaluation doesn't change the values calculated by the program, but it can have a significant impact on performance (and, in particular, it is sometimes much appreciated because it makes performance a lot more predictable — predicting the performance of a Haskell program can be very hard).
This evaluate-and-store mechanism is effectively the core of what a Haskell compiler does under the hood. Haskell is pure², so you cannot implement this in the language itself! However, it's sound for the compiler to do it under the hood, because this particular use of side effects does not break the purity of the language, so it does not invalidate any program transformation. The reason storing the value of a thunk is sound is that it turns call-by-name evaluation into call-by-need, and as we saw above, this neither changes the values of terminating expressions, nor changes which expressions terminate.
This approach can be somewhat problematic in a language that combines purely functional local evaluation with a multithreaded environment and message passing between threads. (This is notably the model of Erlang.) If one thread starts evaluating a thunk, and another thread needs its value just then, what is going to happen? If no precautions are taken, then both threads will calculate the value and store it. In a pure language, this is harmless in the sense that both threads will calculate the same value anyway³. However this can hurt performance. To ensure that a thunk is evaluated only once, the calculation of the value must be wrapped in a lock; this helps with long calculations that are performed many times but hurts short calculations that are performed only once, as taking and releasing a lock takes some time.
¹ The function, not the function body of course.
² Or rather, the fragment of Haskell that doesn't use a side effect monad is pure.
³ It is necessary for the transition between a delayed thunk and a computed value to be atomic — concurrent threads must be able to read a lazy value and get either a valid delayed thunk or a valid computed value, not some mixture of the two that isn't a valid object. At the processor level, the transition from delayed thunk to computed value is usually a pointer assignment, which on most architectures is atomic, fortunately.
What you propose will work. The details for doing this in Scheme can be found in SICP. One could envisage a strict version of Haskell in which there is a "lazy" function which does the opposite of what "seq" does in Haskell. However, adding this to a strict Haskell-like language would require compiler magic, because otherwise the thunk gets forced before being passed to "lazy".
However, if your language has uncontrolled effects, this can get hairy, because an effect happens whenever its enclosing function gets evaluated, and figuring out when that is going to happen in a lazy language is difficult. That's why Haskell has the IO monad.

Haskell pattern match "diverge" and ⊥

I'm trying to understand the Haskell 2010 Report section 3.17.2 "Informal Semantics of Pattern Matching". Most of it, relating to a pattern match succeeding or failing seems straightforward, however I'm having difficulty understanding the case which is described as the pattern match "diverging".
I'm semi-persuaded it means that the match algorithm does not "converge" to an answer (hence the match function never returns). But if it doesn't return, then how can it return a value, as suggested by the parenthetical "i.e. return ⊥"? And what does it mean to "return ⊥" anyway? How does one handle that outcome?
Item 5 has the particularly confusing (to me) point "If the value is ⊥, the match diverges". Is this just saying that a value of ⊥ produces a match result of ⊥? (Setting aside that I don't know what that outcome means!)
Any illumination, possibly with an example, would be appreciated!
Addendum after a couple of lengthy answers:
Thanks Tikhon and all for your efforts.
It seems my confusion comes from there being two different realms of explanation: The realm of Haskell features and behaviors, and the realm of mathematics/semantics, and in Haskell literature these two are intermingled in an attempt to explain the former in terms of the latter, without sufficient signposts (to me) as to which elements belong to which.
Evidently "bottom" ⊥ is in the semantics domain, and does not exist as a value within Haskell (ie: you can't type it in, you never get a result that prints out as " ⊥").
So, where the explanation says a function "returns ⊥", this refers to a function that does any of a number of inconvenient things, like not terminate, throw an exception, or return "undefined". Is that right?
Further, those who commented that ⊥ actually is a value that can be passed around, are really thinking of bindings to ordinary functions that haven't yet actually been called upon to evaluate ("unexploded bombs" so to speak) and might never, due to laziness, right?
The value is ⊥, usually pronounced "bottom". It is a value in the semantic sense--it is not a normal Haskell value per se. It represents computations that do not produce a normal Haskell value: exceptions and infinite loops, for example.
Semantics is about defining the "meaning" of a program. In Haskell, we usually talk about denotational semantics, where the value is a mathematical object of some sort. The most trivial example would be that the expression 10 (but also the expression 9 + 1) has the number 10 as its denotation (rather than the Haskell value 10). We usually write ⟦9 + 1⟧ = 10, meaning that the denotation of the Haskell expression 9 + 1 is the number 10.
However, what do we do with an expression like let x = x in x? There is no Haskell value for this expression. If you tried to evaluate it, it would simply never finish. Moreover, it is not obvious what mathematical object this corresponds to. However, in order to reason about programs, we need to give some denotation for it. So, essentially, we just make up a value for all these computations, and we call the value ⊥ (bottom).
So ⊥ is just a way to define what a computation that doesn't return "means".
We also define other computations like undefined and error "some message" as ⊥ because they also do not have obvious normal values. So throwing an exception corresponds to ⊥. This is exactly what happens with a failed pattern match.
The usual way of thinking about this is that every Haskell type is "lifted"--it contains ⊥. That is, Bool corresponds to {⊥, True, False} rather than just {True, False}. This represents the fact that Haskell programs are not guaranteed to terminate and can have exceptions. This is also true when you define your own type--the type contains every value you defined for it as well as ⊥.
Interestingly, since Haskell is non-strict, ⊥ can exist in normal code. So you could have a value like Just ⊥, and if you never evaluate it, everything will work fine. A good example of this is const: const 1 ⊥ evaluates to 1. This works for failed pattern matches as well:
const 1 (let Just x = Nothing in x) -- 1
You should read the section on denotational semantics in the Haskell WikiBook. It's a very approachable introduction to the subject, which I personally find very fascinating.
Denotational semantics
So, briefly denotational semantics, which is where ⊥ lives, is a mapping from Haskell values to some other space of values. You do this to give meaning to programs in a more formal manner than just talking about what programs should do—you say that they must respect their denotational semantics.
So for Haskell, you often think about how Haskell expressions denote mathematical values. You often see Strachey brackets ⟦·⟧ to denote the "semantic mapping" from Haskell to Math. Finally, we want our semantic brackets to be compatible with semantic operations. For instance
⟦x + y⟧ = ⟦x⟧ + ⟦y⟧
where on the left side + is the Haskell function (+) :: Num a => a -> a -> a and on the right side it's the binary operation in a commutative group. This is cool, because then we know that we can use the properties from the semantic map to know how our Haskell functions should work. To wit, let's write the commutative property "in Math":
⟦x⟧ + ⟦y⟧ == ⟦y⟧ + ⟦x⟧
= ⟦x + y⟧ == ⟦y + x⟧
= ⟦x + y == y + x⟧
where the third step also indicates that the Haskell (==) :: Eq a => a -> a -> a ought to have the properties of a mathematical equivalence relationship.
Well, except...
Anyway, that's all well and good until we remember that computers are finite things and Maths don't much care about that (unless you're using intuitionistic logic, and then you get Coq). So, we have to take note of places where our semantics don't follow Math quite right. Here are three examples
⟦undefined⟧ = ??
⟦error "undefined"⟧ = ??
⟦let x = x in x⟧ = ??
This is where ⊥ comes into play. We just assert that, so far as the denotational semantics of Haskell are concerned, each of those examples might as well mean (the newly introduced Mathematical/semantic concept of) ⊥. What are the Mathematical properties of ⊥? Well, this is where we start to really dive into what the semantic domain is and start talking about monotonicity of functions and CPOs and the like. Essentially, though, ⊥ is a mathematical object which plays roughly the same game as non-termination does. From the point of view of the semantic model, ⊥ is toxic: it infects the expressions it touches with its toxic non-determinism.
But it's not a Haskell-the-language concept, just a Semantic-domain-of-the-language-Haskell thing. In Haskell we have undefined, error and infinite looping. This is important.
Extra-semantic behavior (side note)
So the semantics of ⟦undefined⟧ = ⟦error "undefined"⟧ = ⟦let x = x in x⟧ = ⊥ are clear once we understand the mathematical meanings of ⊥, but it's also clear that those each have different effects "in reality". This is sort of like "undefined behavior" of C... it's behavior that's undefined so far as the semantic domain is concerned. You might call it semantically unobservable.
So how does pattern matching return ⊥?
So what does it mean "semantically" to return ⊥? Well, ⊥ is a perfectly valid semantic value, one with the infection property that models non-determinism (or asynchronous error throwing). From the semantic point of view it can be returned as is.
From the implementation point of view, you have a number of choices, each of which maps to the same semantic value. undefined isn't quite right, nor is entering an infinite loop, so if you're going to pick a semantically undefined behavior you might as well pick one that's useful and throw an error:
*** Exception: <interactive>:2:5-14: Non-exhaustive patterns in function cheers

What are Haskell's strictness points?

We all know (or should know) that Haskell is lazy by default. Nothing is evaluated until it must be evaluated. So when must something be evaluated? There are points where Haskell must be strict. I call these "strictness points", although this particular term isn't as widespread as I had thought. According to me:
Reduction (or evaluation) in Haskell only occurs at strictness points.
So the question is: what, precisely, are Haskell's strictness points? My intuition says that main, seq / bang patterns, pattern matching, and any IO action performed via main are the primary strictness points, but I don't really know why I know that.
(Also, if they're not called "strictness points", what are they called?)
I imagine a good answer will include some discussion about WHNF and so on. I also imagine it might touch on lambda calculus.
Edit: additional thoughts about this question.
As I've reflected on this question, I think it would be clearer to add something to the definition of a strictness point. Strictness points can have varying contexts and varying depth (or strictness). Falling back to my definition that "reduction in Haskell only occurs at strictness points", let us add to that definition this clause: "a strictness point is only triggered when its surrounding context is evaluated or reduced."
So, let me try to get you started on the kind of answer I want. main is a strictness point. It is specially designated as the primary strictness point of its context: the program. When the program (main's context) is evaluated, the strictness point of main is activated. Main's depth is maximal: it must be fully evaluated. Main is usually composed of IO actions, which are also strictness points, whose context is main.
Now you try: discuss seq and pattern matching in these terms. Explain the nuances of function application: how is it strict? How is it not? What about deepseq? let and case statements? unsafePerformIO? Debug.Trace? Top-level definitions? Strict data types? Bang patterns? Etc. How many of these items can be described in terms of just seq or pattern matching?
A good place to start is by understanding this paper: A Natural Semantics for Lazy Evaluation (Launchbury). That will tell you when expressions are evaluated for a small language similar to GHC's Core. Then the remaining question is how to map full Haskell to Core, and most of that translation is given by the Haskell report itself. In GHC we call this process "desugaring", because it removes syntactic sugar.
Well, that's not the whole story, because GHC includes a whole raft of optimisations between desugaring and code generation, and many of these transformations will rearrange the Core so that things get evaluated at different times (strictness analysis in particular will cause things to be evaluated earlier). So to really understand how your program will be evaluated, you need to look at the Core produced by GHC.
Perhaps this answer seems a bit abstract to you (I didn't specifically mention bang patterns or seq), but you asked for something precise, and this is about the best we can do.
I would probably recast this question as, Under what circumstances will Haskell evaluate an expression? (Perhaps tack on a "to weak head normal form.")
To a first approximation, we can specify this as follows:
Executing IO actions will evaluate any expressions that they "need." (So you need to know if the IO action is executed, e.g. its name is main, or it is called from main, AND you need to know what the action needs.)
An expression that is being evaluated (hey, that's a recursive definition!) will evaluate any expressions it needs.
From your intuitive list, main and IO actions fall into the first category, and seq and pattern matching fall into the second category. But I think that the first category is more in line with your idea of "strictness point", because that is in fact how we cause evaluation in Haskell to become observable effects for users.
Giving all of the details specifically is a large task, since Haskell is a large language. It's also quite subtle, because Concurrent Haskell may evaluate things speculatively, even though we end up not using the result in the end: this is a third breed of things that cause evaluation. The second category is quite well studied: you want to look at the strictness of the functions involved. The first category too can be thought to be a sort of "strictness", though this is a little dodgy because evaluate x and seq x $ return () are actually different things! You can treat it properly if you give some sort of semantics to the IO monad (explicitly passing a RealWorld# token works for simple cases), but I don't know if there's a name for this sort of stratified strictness analysis in general.
C has the concept of sequence points, which are guarantees for particular operations that one operand will be evaluated before the other. I think that's the closest existing concept, but the essentially equivalent term strictness point (or possibly force point) is more in line with Haskell thinking.
In practice Haskell is not a purely lazy language: for instance pattern matching is usually strict (so trying a pattern match forces evaluation to happen at least far enough to accept or reject the match).
…
Programmers can also use the seq primitive to force an expression to evaluate regardless of whether the result will ever be used.
$! is defined in terms of seq.
—Lazy vs. non-strict.
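For reference, $! really is a one-liner over seq; this is the definition given in the Haskell 2010 report:

($!) :: (a -> b) -> a -> b
f $! x = x `seq` f x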
So your thinking about !/$! and seq is essentially right, but pattern matching is subject to subtler rules. You can always use ~ to force lazy pattern matching, of course. An interesting point from that same article:
The strictness analyzer also looks for cases where sub-expressions are always required by the outer expression, and converts those into eager evaluation. It can do this because the semantics (in terms of "bottom") don't change.
Let's continue down the rabbit hole and look at the docs for optimisations performed by GHC:
Strictness analysis is a process by which GHC attempts to determine, at compile-time, which data definitely will 'always be needed'. GHC can then build code to just calculate such data, rather than the normal (higher overhead) process for storing up the calculation and executing it later.
—GHC Optimisations: Strictness Analysis.
In other words, strict code may be generated anywhere as an optimisation, because creating thunks is unnecessarily expensive when the data will always be needed (and/or may only be used once).
…no more evaluation can be performed on the value; it is said to be in normal form. If we are at any of the intermediate steps so that we've performed at least some evaluation on a value, it is in weak head normal form (WHNF). (There is also a 'head normal form', but it's not used in Haskell.) Fully evaluating something in WHNF reduces it to something in normal form…
—Wikibooks Haskell: Laziness
(A term is in head normal form if there is no beta-redex in head position. A redex is a head redex if it is preceded only by lambda abstractors of non-redexes.) So when you start to force a thunk, you're working in WHNF; when there are no more thunks left to force, you're in normal form. Another interesting point:
…if at some point we needed to, say, print z out to the user, we'd need to fully evaluate it…
Which naturally implies that, indeed, any IO action performed from main does force evaluation, which should be obvious considering that Haskell programs do, in fact, do things. Anything that needs to go through the sequence defined in main must be in normal form and is therefore subject to strict evaluation.
C. A. McCann got it right in the comments, though: the only thing that's special about main is that main is defined as special; pattern matching on the constructor is sufficient to ensure the sequence imposed by the IO monad. In that respect only seq and pattern-matching are fundamental.
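To make the WHNF point concrete, a minimal sketch (undefined stands in for ⊥):

-- seq evaluates its first argument only to the outermost constructor (WHNF).
main :: IO ()
main = do
  print (Just (undefined :: Int) `seq` "ok")     -- prints "ok": Just _ is already WHNF
  print ((undefined :: Int) `seq` "unreached")   -- throws: no constructor to stop at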
Haskell is, AFAIK, not a purely lazy language, but rather a non-strict language. This means that it does not necessarily evaluate terms at the last possible moment.
A good source for haskell's model of "lazyness" can be found here: http://en.wikibooks.org/wiki/Haskell/Laziness
Basically, it is important to understand the difference between a thunk and weak head normal form (WHNF).
My understanding is that Haskell pulls computations through backwards as compared to imperative languages. What this means is that in the absence of "seq" and bang patterns, it will ultimately be some kind of side effect that forces the evaluation of a thunk, which may cause prior evaluations in turn (true laziness).
As this would lead to a horrible space leak, the compiler figures out how and when to evaluate thunks ahead of time to save space. The programmer can then support this process by giving strictness annotations (en.wikibooks.org/wiki/Haskell/Strictness, www.haskell.org/haskellwiki/Performance/Strictness) to further reduce space usage in the form of nested thunks.
I am not an expert in the operational semantics of Haskell, so I will just leave the link as a resource.
Some more resources:
http://www.haskell.org/haskellwiki/Performance/Laziness
http://www.haskell.org/haskellwiki/Haskell/Lazy_Evaluation
Lazy doesn't mean do nothing. Whenever your program pattern matches a case expression, it evaluates something -- just enough anyway. Otherwise it can't figure out which RHS to use. Don't see any case expressions in your code? Don't worry, the compiler is translating your code to a stripped down form of Haskell where they are hard to avoid using.
For a beginner, a basic rule of thumb is let is lazy, case is less lazy.
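A quick illustration of that rule of thumb (a minimal sketch; undefined stands in for ⊥):

ex1, ex2, ex3 :: Int
ex1 = let x = undefined in 42             -- 42: the binding is never forced
ex2 = case (undefined :: ()) of _  -> 42  -- 42: a wildcard pattern forces nothing
ex3 = case (undefined :: ()) of () -> 42  -- _|_: matching () forces the scrutinee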
This is not a full answer aiming for karma, but just a piece of the puzzle -- to the extent that this is about semantics, bear in mind that there are multiple evaluation strategies that provide the same semantics. One good example here -- and the project also speaks to how we typically think of Haskell semantics -- was the Eager Haskell project, which radically altered evaluation strategies while maintaining the same semantics: http://csg.csail.mit.edu/pubs/haskell.html
The Glasgow Haskell compiler translates your code into a Lambda-calculus-like language called Core. In this language, something is evaluated whenever you pattern match on it with a case statement. Thus if a function is called, the outermost constructor, and only it (if there are no forced fields), is going to be evaluated. Anything else is canned in a thunk. (Thunks are introduced by let bindings.)
Of course this is not exactly what happens in the real language. The compiler converts Haskell into Core in a very sophisticated way, making as many things as possible lazy and making anything that is always needed strict. Additionally, there are unboxed values and tuples that are always strict.
If you try to evaluate a function by hand, you can basically think:
Try to evaluate the outermost constructor of the return.
Anything else that is needed to get the result (but only if it's really needed) is also going to be evaluated. The order doesn't matter.
In the case of IO you have to evaluate the results of all statements from the first to the last. This is a bit more complicated, since the IO monad does some tricks to force evaluation in a specific order.
We all know (or should know) that Haskell is lazy by default. Nothing is evaluated until it must be evaluated.
No.
Haskell is not a lazy language
Haskell is a language in which evaluation order doesn't matter because there are no side effects.
It's not quite true that evaluation order doesn't matter, because the language allows for infinite loops. If you aren't careful, it's possible to get stuck in a cul-de-sac where you evaluate a subexpression forever when a different evaluation order would have led to termination in finite time. So it's more accurate to say:
Haskell implementations must evaluate the program in a way that terminates if there is any evaluation order that terminates. Only if every possible evaluation order fails to terminate can the implementation fail to terminate.
This still leaves implementations with a huge freedom in how they evaluate the program.
A Haskell program is a single expression, namely let {all top-level bindings} in Main.main. Evaluation can be understood as a sequence of reduction (small-)steps which change the expression (which represents the current state of the executing program).
You can divide reduction steps into two categories: those that are provably necessary (provably will be part of any terminating sequence of reductions), and those that aren't. You can divide the provably necessary reductions somewhat vaguely into two subcategories: those that are "obviously" necessary, and those that require some nontrivial analysis to prove them necessary.
Performing only obviously necessary reductions is what's called "lazy evaluation". I don't know whether a purely lazy evaluating implementation of Haskell has ever existed. Hugs may have been one. GHC definitely isn't.
GHC performs reduction steps at compile time that aren't provably necessary; for example, it will replace 1+2::Int with 3::Int even if it can't prove that the result will be used.
GHC may also perform not-provably-necessary reductions at run time in some circumstances. For example, when generating code to evaluate f (x+y), if x and y are of type Int and their values will be known at run time, but f can't be proven to use its argument, there is no reason not to compute x+y before calling f. It uses less heap space and less code space and is probably faster even if the argument isn't used. However, I don't know whether GHC actually takes these sorts of optimization opportunities.
GHC definitely performs evaluation steps at run time that are proven necessary only by fairly complex cross-module analyses. This is extremely common and may represent the bulk of the evaluation of realistic programs. Lazy evaluation is a last-resort fallback evaluation strategy; it isn't what happens as a rule.
There was an "optimistic evaluation" branch of GHC that did much more speculative evaluation at run time. It was abandoned because of its complexity and the ongoing maintenance burden, not because it didn't perform well. If Haskell was as popular as Python or C++ then I'm sure there would be implementations with much more sophisticated runtime evaluation strategies, maintained by deep-pocketed corporations. Non-lazy evaluation isn't a change to the language, it's just an engineering challenge.
Reduction is driven by top-level I/O, and nothing else
You can model interaction with the outside world by means of special side-effectful reduction rules like: "If the current program is of the form getChar >>= <expr>, then get a character from standard input and reduce the program to <expr> applied to the character you got."
The entire goal of the run time system is to evaluate the program until it has one of these side-effecting forms, then do the side effect, then repeat until the program has some form that implies termination, such as return ().
There are no other rules about what is reduced when. There are only rules about what can reduce to what.
For example, the only rules for if expressions are that if True then <expr1> else <expr2> can be reduced to <expr1>, if False then <expr1> else <expr2> can be reduced to <expr2>, and if <exc> then <expr1> else <expr2>, where <exc> is an exceptional value, can be reduced to an exceptional value.
If the expression representing the current state of your program is an if expression, you have no choice but to perform reductions on the condition until it's True or False or <exc>, because that's the only way you'll ever get rid of the if expression and have any hope of reaching a state that matches one of the I/O rules. But the language specification doesn't tell you to do that in so many words.
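(As an aside, the Haskell report specifies if as sugar for case, so the same reduction machinery covers it; a hypothetical ifThenElse spells the desugaring out:)

-- if e then e1 else e2  is defined to mean  case e of { True -> e1; False -> e2 }
ifThenElse :: Bool -> a -> a -> a
ifThenElse e e1 e2 = case e of
  True  -> e1
  False -> e2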
These sorts of implicit ordering constraints are the only way that evaluation can be "forced" to happen. This is a frequent source of confusion for beginners. For example, people sometimes try to make foldl more strict by writing foldl (\x y -> x `seq` x+y) instead of foldl (+). This doesn't work, and nothing like it can ever work, because no expression can make itself evaluate. The evaluation can only "come from above". seq is not special in any way in this regard.
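For contrast, here is roughly the classic definition of Data.List.foldl', where seq does work: it sits on the demand path of each recursive call, so the forcing still comes "from above" when the fold's result is demanded.

foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' f z []     = z
foldl' f z (x:xs) = let z' = f z x
                    in z' `seq` foldl' f z' xs  -- force the accumulator each step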
Reduction happens everywhere
Reduction (or evaluation) in Haskell only occurs at strictness points. [...] My intuition says that main, seq / bang patterns, pattern matching, and any IO action performed via main are the primary strictness points [...].
I don't see how to make sense of that statement. Every part of the program has some meaning, and that meaning is defined by reduction rules, so reduction happens everywhere.
To reduce a function application <expr1> <expr2>, you have to evaluate <expr1> until it has a form like (\x -> <expr1'>) or (getChar >>=) or something else that matches a rule. But for some reason function application doesn't tend to show up on lists of expressions that allegedly "force evaluation", while case always does.
You can see this misconception in a quote from the Haskell wiki, found in another answer:
In practice Haskell is not a purely lazy language: for instance pattern matching is usually strict
I don't understand what could qualify as a "purely lazy language" to whoever wrote that, except, perhaps, a language in which every program hangs because the runtime never does anything. If pattern matching is a feature of your language then you've got to actually do it at some point. To do it, you have to evaluate the scrutinee enough to determine whether it matches the pattern. That's the laziest way to match a pattern that is possible in principle.
~-prefixed patterns are often called "lazy" by programmers, but the language spec calls them "irrefutable". Their defining property is that they always match. Because they always match, you don't have to evaluate the scrutinee to determine whether they match or not, so a lazy implementation won't. The difference between regular and irrefutable patterns is what expressions they match, not what evaluation strategy you're supposed to use. The spec says nothing about evaluation strategies.
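For example (my illustration):

f (a, b)  = (0 :: Int)  -- refutable: the scrutinee must be evaluated
                        -- far enough to expose the (,) constructor
g ~(a, b) = (0 :: Int)  -- irrefutable: always matches, so the
                        -- scrutinee is never evaluated

Here f undefined throws, while g undefined returns 0.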
main is a strictness point. It is specially designated as the primary strictness point of its context: the program. When the program (main's context) is evaluated, the strictness point of main is activated. [...] Main is usually composed of IO actions, which are also strictness points, whose context is main.
I'm not convinced that any of that has any meaning.
Main's depth is maximal: it must be fully evaluated.
No, main only has to be evaluated "shallowly", to make I/O actions appear at the top level. main is the entire program, and the program isn't completely evaluated on every run because not all code is relevant to every run (in general).
Discuss seq and pattern matching in these terms.
I already talked about pattern matching. seq can be defined by rules that are similar to case and application: for example, (\x -> <expr1>) `seq` <expr2> reduces to <expr2>. This "forces evaluation" in the same way that case and application do. WHNF is just a name for what these expressions "force evaluation" to.
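A few concrete cases (my examples) of what that means:

undefined `seq` ()               -- exceptional: undefined has no WHNF
(undefined, undefined) `seq` ()  -- (): the pair constructor is WHNF
(\x -> undefined) `seq` ()       -- (): a lambda is already WHNF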
Explain the nuances of function application: how is it strict? How is it not?
It's strict in its left expression just like case is strict in its scrutinee. It's also strict in the function body after substitution just like case is strict in the RHS of the selected alternative after substitution.
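For instance, to reduce

(if p then negate else id) 5

the if must first be reduced to a function, which forces p; only then is that function's body, with 5 substituted, evaluated.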
What about deepseq?
It's just a library function, not a builtin.
Incidentally, deepseq is semantically weird. It should take only one argument. I think that whoever invented it just blindly copied seq with no understanding of why seq needs two arguments. I count deepseq's name and specification as evidence that a poor understanding of Haskell evaluation is common even among experienced Haskell programmers.
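(For what it's worth, Control.DeepSeq does also export the one-argument form, as force; the two are interdefinable. A sketch:)

import Control.DeepSeq (NFData, deepseq)

-- The library's force is the one-argument form:
--   force x = x `deepseq` x
-- and conversely x `deepseq` y = force x `seq` y.
fullyEvaluate :: NFData a => a -> a
fullyEvaluate x = x `deepseq` x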
let and case statements?
I talked about case. let, after desugaring and type checking, is just a way of writing an arbitrary expression graph in tree form. Here's a paper about it.
unsafePerformIO?
To an extent it can be defined by reduction rules. For example, case unsafePerformIO <expr> of <alts> reduces to unsafePerformIO (<expr> >>= \x -> return (case x of <alts>)), and at the top level only, unsafePerformIO <expr> reduces to <expr>.
This doesn't do any memoization. You could try to simulate memoization by rewriting every unsafePerformIO expression to explicitly memoize itself, and creating the associated IORefs... somewhere. But you could never reproduce GHC's memoization behavior, because it depends on unpredictable details of the optimization process, and because it isn't even type safe (as shown by the infamous polymorphic IORef example in the GHC documentation).
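The closest ordinary code gets is the well-known global-variable idiom, which works only as long as GHC doesn't duplicate the definition, hence the pragma:

import Data.IORef (IORef, newIORef)
import System.IO.Unsafe (unsafePerformIO)

counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}  -- one shared IORef only if never inlined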
Debug.Trace?
Debug.Trace.trace is just a simple wrapper around unsafePerformIO.
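Roughly (a sketch, not GHC's exact definition):

import System.IO (hPutStrLn, stderr)
import System.IO.Unsafe (unsafePerformIO)

trace :: String -> a -> a
trace msg x = unsafePerformIO (hPutStrLn stderr msg) `seq` x

which is why the message appears when (and only when) the traced value is demanded.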
Top-level definitions?
Top-level variable bindings are the same as nested let bindings. data, class, import, and such are a whole different ball game.
Strict data types? Bang patterns?
Just sugar for seq.
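Both desugar to a seq on the relevant value (a sketch):

{-# LANGUAGE BangPatterns #-}

data T = T !Int   -- applying T forces its field first, as if by seq

f :: Int -> Int
f !x = 1          -- behaves like f x = x `seq` 1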

Eager evaluation/applicative order and lazy evaluation/normal order

As far as I know, eager evaluation/applicative order evaluates all arguments to a function before applying it; lazy evaluation/normal order, on the other hand, evaluates the arguments only when needed.
So, what are the differences between the pair of terms eager evaluation and applicative order, and lazy evaluation and normal order?
Thanks.
Lazy evaluation evaluates a term at most once, while normal order would evaluate it as often as it appears. So, for example, if you have f(x) = x + x and you call it as f(g(42)), then g(42) is evaluated once under lazy evaluation or applicative order, but twice under normal order.
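You can watch the sharing with Debug.Trace (my example):

import Debug.Trace (trace)

g :: Int -> Int
g n = trace "g evaluated" (n * n)

f :: Int -> Int
f x = x + x

main :: IO ()
main = print (f (g 42))

Under lazy evaluation this prints "g evaluated" once: the two uses of x share one thunk. Textual normal-order substitution would evaluate g 42 twice.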
Eager evaluation and applicative order are synonymous, at least when using the definition of applicative order found in Structure and Interpretation of Computer Programs, which seems to match yours. (Wikipedia defines applicative order a bit differently and has it as a special case of eager evaluation).
I'm reading SICP too, and I've been curious about the definition of normal order given by the authors. It seemed rather similar to lazy evaluation to me, so I went looking for more information about both.
I know this question was asked a long time ago, but I looked at the FAQ and found no mention of answering old questions, so I thought I'd leave what I've found here so other people could use it in the future.
This is what I've found, and I'm inclined to agree with it:
I would argue (as have others) that lazy evaluation and
NormalOrderEvaluation are two different things; the difference is
alluded to above. In lazy evaluation, evaluation of the argument is
deferred until it is needed, at which point the argument is evaluated
and its result saved (memoized). Further uses of the argument in the
function use the computed value. The C/C++ operators ||, &&, and ?: are all examples of lazy evaluation. (Unless some newbie C/C++ programmer is daft enough to overload && or ||, in which case the overloaded versions are evaluated in strict order, which is why the && and || operators should NEVER be overloaded in C++.)
In other words, each argument is evaluated at most once, possibly not
at all.
NormalOrderEvaluation, on the other hand, re-evaluates the expression
each time it is used. Think of C macros, CallByName in languages which
support it, and the semantics of looping control structures, etc.
Normal-order evaluation can take much longer than applicative order
evaluation, and can cause side effects to happen more than once.
(Which is why, of course, statements with side effects generally ought not to be given as arguments to macros in C/C++.)
If the argument is invariant and has no side effects, the only
difference between the two is performance. Indeed, in a purely
functional language, lazy eval can be viewed as an optimization of
normal-order evaluation. With side effects present, or expressions
which can return a different value when re-evaluated, the two have
different behavior; normal order eval in particular has a bad
reputation in procedural languages due to the difficulty of reasoning
about such programs without ReferentialTransparency.
It should also be noted that strict-order evaluation (as well as lazy
evaluation) can be achieved in a language which supports normal-order
evaluation via explicit memoing. The opposite isn't true; it requires
passing in thunks, functions, or objects which can be called/messaged
in order to defer/repeat the evaluation.
And
Lazy evaluation combines normal-order evaluation and sharing:
• Never evaluate something until you have to (normal-order)
• Never evaluate something more than once (sharing)
http://c2.com/cgi/wiki?LazyEvaluation
http://cs.anu.edu.au/student/comp3610/lectures/Lazy.pdf
From Kevin Sookocheff's Normal, Applicative and Lazy Evaluation post (emphases, stylistic changes mine):
Lazy Evaluation
While normal-order evaluation may result in doing extra work by
requiring function arguments to be evaluated more than once,
applicative-order evaluation may result in programs that do not
terminate where their normal-order equivalents do. In practice, most
functional programming languages solve this problem using lazy
evaluation.
With lazy evaluation, we delay function evaluation in a way that avoids multiple evaluations of the same function, thus combining the benefits of normal-order and applicative-order evaluation.
With lazy
evaluation, we evaluate a value when it is needed, and after
evaluation all copies of that expression are updated with the new
value. In effect, a parameter passed into a function is stored in a
single location in memory so that the parameter need only be evaluated
once. That is, we remember all of the locations where a certain argument will be used, and when we evaluate a function, we replace the argument with the value.
As a result, with lazy evaluation, every
parameter is evaluated at most once.
This was too long to post as a comment beneath the question, and updating existing answers with it seemed inappropriate, hence this answer.

Resources