I have been wondering about this a lot, but I haven't been able to find anything about it.
When using the seq function, how does it really work? Everywhere it is just explained by saying that seq a b evaluates a, discards the result, and returns b.
But what does that really mean? Would the following result in strict evaluation:
foo s t = seq q (bar q t) where
q = s*t
What I mean is, is q strictly evaluated before being used in bar? And would the following be equivalent:
foo s t = seq (s*t) (bar (s*t) t)
I find it hard to get specifics on how this function actually behaves.
You're not alone. seq is probably one of the most difficult Haskell functions to use properly, for a few different reasons. In your first example:
foo s t = seq q (bar q t) where
q = s*t
q is evaluated before bar q t is evaluated. If bar q t is never evaluated, q won't be either. So if you have
main = do
let val = foo 10 20
return ()
as val is never used, it won't be evaluated. So q won't be evaluated either. If you instead have
main = print (foo 10 20)
the result of foo 10 20 is evaluated (by print), so within foo q is evaluated before the result of bar.
This is also why this doesn't work:
myseq x = seq x x
Semantically, this means the first x will be evaluated before the second x is evaluated. But if the second x is never evaluated, the first one doesn't need to be either. So seq x x is exactly equivalent to x.
Your second example may or may not be the same thing. Here, the expression s*t will be evaluated before bar's output, but it may not be the same s*t as the first parameter to bar. If the compiler performs common sub-expression elimination, it may common-up the two identical expressions. GHC can be fairly conservative about where it does CSE though, so you can't rely on this. If I define bar q t = q*t it does perform the CSE and evaluate s*t before using that value in bar. It may not do so for more complex expressions.
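To make the difference concrete, here is a small experiment you could run in GHCi (a sketch: tick is a made-up probe, q + t stands in for bar q t, and the double trace only shows up when compiled without optimizations, where GHC performs no CSE):
import Debug.Trace (trace)

-- tick announces each time its argument is actually computed
tick :: Int -> Int
tick n = trace "multiplying" n

fooShared :: Int -> Int -> Int
fooShared s t = seq q (q + t)   -- "multiplying" is traced once: q is shared
  where q = tick (s * t)

fooDup :: Int -> Int -> Int
fooDup s t = seq (tick (s * t)) (tick (s * t) + t)   -- may trace twice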
You might also want to know what is meant by strict evaluation. seq evaluates the first argument to weak head normal form (WHNF), which for data types means unpacking the outermost constructor. Consider this:
baz xs y = seq xs (map (*y) xs)
xs must be a list, because of map. When seq evaluates it, it will essentially transform the code into
case xs of
[] -> map (*y) xs
(_:_) -> map (*y) xs
This means it will determine if the list is empty or not, then return the second argument. Note that none of the list values are evaluated. So you can do this:
Prelude> seq [undefined] 4
4
but not this
Prelude> seq undefined 5
*** Exception: Prelude.undefined
Whatever data type you use for seq's first argument, evaluating to WHNF will go just far enough to figure out the constructor and no further, unless the data type has fields marked as strict with a strictness annotation (!). In that case the strict fields will also be evaluated to WHNF.
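For example (a minimal sketch; strictness annotations on constructor fields are plain Haskell, no extension needed):
data Lazy   = Lazy   Int    -- ordinary lazy field
data Strict = Strict !Int   -- strict field: forced together with the constructor

-- seq (Lazy   undefined) 'a'   -- evaluates to 'a': WHNF stops at the constructor
-- seq (Strict undefined) 'a'   -- diverges: WHNF also forces the strict field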
Edit: (thanks to Daniel Wagner for suggestion in comments)
For functions, seq will evaluate the expression until the function "has a lambda showing", which means that it's ready for application. Here are some examples that might demonstrate what this means:
-- ok, lambda is outermost
Prelude> seq (\x -> undefined) 'a'
'a'
-- not ok. Because of the inner seq, `undefined` must be evaluated before
-- the lambda is showing
Prelude> seq (seq undefined (\x -> x)) 'b'
*** Exception: Prelude.undefined
If you think of a lambda binding as a (built-in) data constructor, seq on functions is perfectly consistent with using it on data.
Also, "lambda binding" subsumes all types of function definitions, whether defined by lambda notation or as a normal function.
The Controversy section of the HaskellWiki's seq page has a little about some of the consequences of seq in relation to functions.
You can think of seq as:
seq a b = case a of
_ -> b
This will evaluate a to head-normal form (WHNF) and then continue with evaluating b.
Edit after augustss comment: this case ... of is the strict, GHC Core one, which always forces its argument.
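To see why that qualification matters: in source Haskell a wildcard case does not force its scrutinee, so the definition above is not valid source code for seq (a small demonstration, with notSeq as a made-up name):
notSeq :: a -> b -> b
notSeq a b = case a of _ -> b   -- the wildcard matches without evaluating a

-- notSeq undefined 5  ==> 5
-- seq    undefined 5  ==> *** Exception: Prelude.undefined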
Related
In Haskell, the () type has two values, namely, () and bottom. If you have an expression e :: (), there's no point in actually inspecting it, since either it's e = () or by inspecting it you're crashing a program which could otherwise have not crashed.
Hence, I figured that perhaps operations on values of type () would not inspect the value and would not distinguish between () and bottom.
However, this is wildly untrue:
$ ghci
GHCi, version 9.0.2: https://www.haskell.org/ghc/ :? for help
ghci> u = (undefined :: ())
ghci> show u
"*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:75:14 in base:GHC.Err
undefined, called at <interactive>:1:6 in interactive:Ghci1
ghci> () == u
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:75:14 in base:GHC.Err
undefined, called at <interactive>:1:6 in interactive:Ghci1
ghci> f () = "ok"
ghci> f u
"*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:75:14 in base:GHC.Err
undefined, called at <interactive>:1:6 in interactive:Ghci1
What is the reason for this? Here are some conjectures:
1. For some reason that I can't think of, it's useful to be non-lazy on (). Sometimes we want that bottom to propagate.
2. Haskell semantics are written in such a way that destructuring any ADT, even a trivial one, inspects it. This means that having case (undefined :: ()) of { () -> ... } not throw would be a violation of language semantics.
3. () is an extremely special case and simply isn't worth the attention to eke out this tiny extra bit of safety in a massive language like Haskell.
There's also the possible combined explanation of 2+3: Haskell could have had semantics dictating that a case e of expression inspects e unless it is of type (), but that would pollute the language spec for relatively low benefit.
I will address this part:
For some reason that I can't think of, it's useful to be non-lazy on (). Sometimes we want that bottom to propagate.
Let's have a look at Control.Parallel.Strategies (version 1, an older version). This is a module for parallel evaluation. Let's focus on one of its functions for the sake of simplicity:
parMap :: Strategy b -> (a -> b) -> [a] -> [b]
The result of parMap strat f xs is the same as map f xs, except that the list is computed in parallel. What is the strat argument? Well,
strat :: Strategy b
means
strat :: b -> ()
There are only two things you can do with strat:
call it and ignore the result, which by laziness amounts to not calling it at all;
call it and force the result, even if you know it's () or a bottom.
parMap does the latter, in parallel. This allows the caller to specify a strat argument that evaluates the list values of type b as needed. For example
parMap (\(x,y) -> ()) f xs
parMap (\(x,y) -> x `seq` ()) f xs
parMap (\(x,y) -> x `seq` y `seq` ()) f xs
are valid calls, and will cause parMap to evaluate each pair of the new list, respectively: just enough to expose the pair constructor; that plus the first component; or that plus both components.
Hence, forcing the () result of strat in this case allows the user to control how much evaluation to perform during parMap, i.e. how much to force the result (in parallel), and consequently which parts of the result should be left unevaluated. (By comparison, map f xs would leave the result fully unevaluated -- it is completely lazy. parMap cannot do that, otherwise it would no longer be parallel.)
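Here is a rough sketch of the idea in plain Haskell (seqMap is a hypothetical sequential analogue of parMap, showing only the forcing behaviour, not the parallelism):
type Strategy b = b -> ()   -- the version-1 type described above

-- force each result according to strat as soon as it is demanded
seqMap :: Strategy b -> (a -> b) -> [a] -> [b]
seqMap strat f = map (\x -> let y = f x in strat y `seq` y)

-- the three strategies from the calls above, as named functions:
rPair, rFst, rBoth :: (a, b) -> ()
rPair (_, _) = ()                  -- expose the pair constructor only
rFst  (x, _) = x `seq` ()          -- ...and the first component
rBoth (x, y) = x `seq` y `seq` ()  -- ...and both components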
Minor digression: note that the GADT
data a :~: b where
Refl :: t :~: t
has one constructor like (). Here, it is mandatory that such values are forced as in:
foo :: Int :~: String -> Int -> String
foo Refl x = x ++ " hello"
Here the first argument must be a bottom. By forcing that, we make the function error out with an exception. If we did not force that, we would get a very nasty undefined behavior like those in C and C++, completely breaking type safety. Haskell will correctly reject any attempt to circumvent that:
foo :: Int :~: String -> Int -> String
foo _ x = x ++ " hello"
triggers a type error at compile time.
I don't know for sure, but I suspect it's none of the things you said. Instead, this is so that the language is predictable and consistent.
There are, essentially, two things you observed, and I consider them to be separate things. The first is that checking whether a value x is indeed () with a case statement forces evaluation of x; the second is that the instances (of Show and Eq) are written to use a case statement.
Pattern matching: the predictable, consistent rule here is that if you write case <e0> of <pat> -> <e1>, then e0 is evaluated far enough to check whether the constructors in pat are in fact in the given places. Well, okay, there are some wrinkles here to do with irrefutable patterns; let's say instead that e0 is evaluated far enough to check whether pat actually does match! For the () type, that means that the pattern () causes full evaluation -- because you've specified the full value that you expect it to be -- while the pattern x or _ can match without further evaluation.
Class instances: the natural inductive way to specify what the various class instances do is to always have an outermost case that matches against each available constructor with simple variable patterns for the fields, then does something (presumably recursive calls) on each of the fields in turn. That is, simplifying a bit, the show implementation goes like:
show x = case x of
<Con0> field00 field01 field02 <...> -> "<Con0>"
++ " " ++ show field00
++ " " ++ show field01
++ " " ++ show field02
++ <...>
<Con1> field10 field11 field12 <...> -> "<Con1>"
++ " " ++ show field10
++ " " ++ show field11
++ " " ++ show field12
++ <...>
<...>
It is very natural for the specialization of this scheme to the single-constructor, zero-field type () to go:
show x = case x of
() -> "()"
(Additionally, the Report specifies that (==) is always strict in both arguments; but that property would also arise naturally from the obvious way of writing a generic Eq instance derivation algorithm.) Therefore the path of least surprise is for class instances to pattern match on their argument(s).
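Written out by hand for (), the scheme gives something like this (a sketch with a hypothetical stand-in name, since the real instance lives in base):
eqUnit :: () -> () -> Bool
eqUnit () () = True   -- both pattern matches force their arguments

-- eqUnit undefined () diverges, matching the Report's strictness for (==)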
#2 is definitely true.
The () type is just a nullary data type with special type/data constructor syntax:
data () = ()
As a result, the Haskell 2010 report, while only providing an informal semantics, makes it pretty clear in section 3.17.2 Informal Semantics of Pattern Matching that the expression:
case undefined of () -> "ack!"
will be evaluated as per rule #5:
Matching the pattern con pat1 … patn against a value, where con is a constructor defined by data, depends on the value:
If the value is of the form con v1 … vn, sub-patterns are matched left-to-right against the components of the data value; if all matches succeed, the overall match succeeds; the first to fail or diverge causes the overall match to fail or diverge, respectively.
If the value is of the form con′ v1 … vm, where con is a different constructor to con′, the match fails.
If the value is ⊥, the match diverges.
Here, the value of undefined is ⊥, so the third bullet point applies, and the match diverges. And if the match diverges, the program diverges, and if the program diverges it must terminate with an error or -- at worst -- loop forever. It cannot continue as if nothing has happened. Admittedly, this last part is not explicitly stated, but it is the only reasonable interpretation for the semantics of a divergent evaluation of an expression.
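Note that the divergence really does come from the pattern match itself: an irrefutable (lazy) pattern skips the inspection, so this variant returns "ack!" instead of diverging:
lazyAck :: String
lazyAck = case (undefined :: ()) of ~() -> "ack!"   -- ~() matches without evaluating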
As far as I know, seq a b evaluates (forces) a and b before returning b. It does not guarantee that a is evaluated first.
pseq a b evaluates a first, then evaluates/returns b.
Now consider the following:
xseq a b = (seq a id) b
Function application needs to evaluate the left operand first (to get a lambda form), and it can't blindly evaluate the right operand before entering the function because that would violate Haskell's non-strict semantics.
Therefore (seq a id) b must evaluate seq a id first, which forces a and id (in some unspecified order (but evaluating id does nothing)), then returns id b (which is b); therefore xseq a b evaluates a before b.
Is xseq a valid implementation of pseq? If not, what's wrong with the above argument (and is it possible to define pseq in terms of seq at all)?
The answer seems to be "no, at least not without additional magic".
The problem with
xseq a b = (seq a id) b
is that the compiler can see that the result of seq a id is id, which is strict everywhere. Function application is allowed to evaluate the argument first if the function is strict, because then doing so does not change the semantics of the expression. Therefore an optimizing compiler could start evaluating b first because it knows it will eventually need it.
Can pseq be defined in terms of seq?
In GHC - yes.
As noted by Alec, you'll also need the mirror-smoke lazy:
-- for GHC 8.6.5
import Prelude(seq)
import GHC.Base(lazy)
infixr 0 `pseq`
pseq :: a -> b -> b
pseq x y = x `seq` lazy y
the definition matching its counterpart in the GHC sources; the imports are very different.
For other Haskell implementations, this may work:
import Prelude(seq)
infixr 0 `pseq`
pseq :: a -> b -> b
pseq x y = x `seq` (case x of _ -> y)
possibly in conjunction with - at the very least - the equivalent of:
-- for GHC 8.6.5
{-# NOINLINE pseq #-}
I'll let melpomene decide if that also qualifies as mirror-smoke...
I'm trying to better understand Haskell's laziness, such as when it evaluates an argument to a function.
From this source:
But when a call to const is evaluated (that’s the situation we are interested in, here, after all), its return value is evaluated too ... This is a good general principle: a function obviously is strict in its return value, because when a function application needs to be evaluated, it needs to evaluate, in the body of the function, what gets returned. Starting from there, you can know what must be evaluated by looking at what the return value depends on invariably. Your function will be strict in these arguments, and lazy in the others.
So a function in Haskell always evaluates its own return value? If I have:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
head (foo [1..]) -- = 4
According to the above paragraph, map (* 2) xs must be evaluated. Intuitively, I would think that means applying the map to the entire list, resulting in an infinite loop.
But, I can successfully take the head of the result. I know that : is lazy in Haskell, so does this mean that evaluating map (* 2) xs just means constructing something else that isn't fully evaluated yet?
What does it mean to evaluate a function applied to an infinite list? If the return value of a function is always evaluated when the function is evaluated, can a function ever actually return a thunk?
Edit:
bar x y = x
var = bar (product [1..]) 1
This code doesn't hang. When I create var, does it not evaluate its body? Or does it set var to product [1..] and not evaluate that? If the latter, bar is not returning its body in WHNF, right, so did it really 'evaluate' x? How could bar be strict in x if it doesn't hang on computing product [1..]?
First of all, Haskell does not specify when evaluation happens so the question can only be given a definite answer for specific implementations.
The following is true for all non-parallel implementations that I know of, like ghc, hbc, nhc, hugs, etc (all G-machine based, btw).
BTW, something to remember is that when you hear "evaluate" for Haskell it normally means "evaluate to WHNF".
Unlike strict languages you have to distinguish between two "callers" of a function, the first is where the call occurs lexically, and the second is where the value is demanded. For a strict language these two always coincide, but not for a lazy language.
Let's take your example and complicate it a little:
foo [] = []
foo (_:xs) = map (* 2) xs
bar x = (foo [1..], x)
main = print (head (fst (bar 42)))
The foo function occurs in bar. Evaluating bar will return a pair, and the first component of the pair is a thunk corresponding to foo [1..]. So bar is what would be the caller in a strict language, but in the case of a lazy language it doesn't call foo at all, instead it just builds the closure.
Now, in the main function we actually need the value of head (fst (bar 42)) since we have to print it. So the head function will actually be called. The head function is defined by pattern matching, so it needs the value of the argument. So fst is called. It too is defined by pattern matching and needs its argument so bar is called, and bar will return a pair, and fst will evaluate and return its first component. And now finally foo is "called"; and by called I mean that the thunk is evaluated (entered as it's sometimes called in TIM terminology), because the value is needed. The only reason the actual code for foo is called is that we want a value. So foo had better return a value (i.e., a WHNF).

The foo function will evaluate its argument and end up in the second branch. Here it will tail call into the code for map. The map function is defined by pattern match and it will evaluate its argument, which is a cons. So map will return the following {(*2) y} : {map (*2) ys}, where I have used {} to indicate a closure being built. So as you can see map just returns a cons cell with the head being a closure and the tail being a closure.
To understand the operational semantics of Haskell better I suggest you look at some paper describing how to translate Haskell to some abstract machine, like the G-machine.
I always found that the term "evaluate," which I had learned in other contexts (e.g., Scheme programming), always got me all confused when I tried to apply it to Haskell, and that I made a breakthrough when I started to think of Haskell in terms of forcing expressions instead of "evaluating" them. Some key differences:
"Evaluation," as I learned the term before, strongly connotes mapping expressions to values that are themselves not expressions. (One common technical term here is "denotations.")
In Haskell, the process of forcing is IMHO most easily understood as expression rewriting. You start with an expression, and you repeatedly rewrite it according to certain rules until you get an equivalent expression that satisfies a certain property.
In Haskell the "certain property" has the unfriendly name weak head normal form ("WHNF"), which really just means that the expression is either a nullary data constructor or an application of a data constructor.
Let's translate that to a very rough set of informal rules. To force an expression expr:
If expr is a nullary constructor or a constructor application, the result of forcing it is expr itself. (It's already in WHNF.)
If expr is a function application f arg, then the result of forcing it is obtained this way:
Find the definition of f.
Can you pattern match this definition against the expression arg? If not, then force arg and try again with the result of that.
Substitute the pattern match variables in the body of f with the parts of (the possibly rewritten) arg that correspond to them, and force the resulting expression.
One way of thinking of this is that when you force an expression, you're trying to rewrite it minimally to reduce it to an equivalent expression in WHNF.
Let's apply this to your example:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
-- We want to force this expression:
head (foo [1..])
We will need definitions for head and map:
head [] = undefined
head (x:_) = x
map _ [] = []
map f (x:xs) = f x : map f xs
-- Not real code, but a rule we'll be using for forcing infinite ranges.
[n..] ==> n : [(n+1)..]
So now:
head (foo [1..]) ==> head (foo (1 : [2..]))      -- using the forcing rule for [n..]
==> head (map (*2) [2..])        -- using the definition of foo
==> head (map (*2) (2 : [3..]))  -- using the forcing rule for [n..]
==> head (2*2 : map (*2) [3..])  -- using the definition of map
==> 2*2                          -- using the definition of head
==> 4                            -- using the definition of *
I believe the idea must be that in a lazy language if you're evaluating a function application, it must be because you need the result of the application for something. So whatever reason caused the function application to be reduced in the first place is going to continue to need to reduce the returned result. If we didn't need the function's result we wouldn't be evaluating the call in the first place, the whole application would be left as a thunk.
A key point is that the standard "lazy evaluation" order is demand-driven. You only evaluate what you need. Evaluating more risks violating the language spec's definition of "non-strict semantics" and looping or failing for some programs that should be able to terminate; lazy evaluation has the interesting property that if any evaluation order can cause a particular program to terminate, so can lazy evaluation.1
But if we only evaluate what we need, what does "need" mean? Generally it means either
a pattern match needs to know what constructor a particular value is (e.g. I can't know what branch to take in your definition of foo without knowing whether the argument is [] or _:xs)
a primitive operation needs to know the entire value (e.g. the arithmetic circuits in the CPU can't add or compare thunks; I need to fully evaluate two Int values to call such operations)
the outer driver that executes the main IO action needs to know what the next thing to execute is
So say we've got this program:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
main :: IO ()
main = print (head (foo [1..]))
To execute main, the IO driver has to evaluate the thunk print (head (foo [1..])) to work out that it's print applied to the thunk head (foo [1..]). print needs to evaluate its argument in order to print it, so now we need to evaluate that thunk.
head starts by pattern matching its argument, so now we need to evaluate foo [1..], but only to WHNF - just enough to tell whether the outermost list constructor is [] or :.
foo starts by pattern matching on its argument. So we need to evaluate [1..], also only to WHNF. That's basically 1 : [2..], which is enough to see which branch to take in foo.2
The : case of foo (with xs bound to the thunk [2..]) evaluates to the thunk map (*2) [2..].
So foo is evaluated, and didn't evaluate its body. However, we only did that because head was pattern matching to see if we had [] or x : _. We still don't know that, so we must immediately continue to evaluate the result of foo.
This is what the article means when it says functions are strict in their result. Given that a call to foo is evaluated at all, its result will also be evaluated (and so, anything needed to evaluate the result will also be evaluated).
But how far it needs to be evaluated depends on the calling context. head is only pattern matching on the result of foo, so it only needs a result to WHNF. We can get an infinite list to WHNF (we already did so, with 1 : [2..]), so we don't necessarily get in an infinite loop when evaluating a call to foo. But if head were some sort of primitive operation implemented outside of Haskell that needed to be passed a completely evaluated list, then we'd be evaluating foo [1..] completely, and thus would never finish in order to come back to head.
So, just to complete my example, we're evaluating map (* 2) [2..].
map pattern matches its second argument, so we need to evaluate [2..] as far as 2 : [3..]. That's enough for map to return the thunk (* 2) 2 : map (* 2) [3..], which is in WHNF. And so it's done, we can finally return to head.
head ((* 2) 2 : map (* 2) [3..]) doesn't need to inspect either side of the :, it just needs to know that there is one so it can return the left side. So it just returns the unevaluated thunk (* 2) 2.
Again though, we only evaluated the call to head this far because print needed to know what its result is, so although head doesn't evaluate its result, its result is always evaluated whenever the call to head is.
(* 2) 2 evaluates to 4, print converts that into the string "4" (via show), and the line gets printed to the output. That was the entire main IO action, so the program is done.
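You can watch this demand-driven evaluation directly in GHCi with the :sprint command, which shows a value without forcing it (unevaluated thunks print as _). A sketch of such a session:
Prelude> let xs = map (* 2) [1..5] :: [Int]
Prelude> :sprint xs
xs = _
Prelude> head xs
2
Prelude> :sprint xs
xs = 2 : _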
1 Implementations of Haskell, such as GHC, do not always use "standard lazy evaluation", and the language spec does not require it. If the compiler can prove that something will always be needed, or cannot loop/error, then it's safe to evaluate it even when lazy evaluation wouldn't (yet) do so. This can often be faster so GHC optimizations do actually do this.
2 I'm skipping over a few details here, like that print does have some non-primitive implementation we could step inside and lazily evaluate, and that [1..] could be further expanded to the functions that actually implement that syntax.
Not necessarily. Haskell is lazy, meaning that it only evaluates when it needs to. This has some interesting effects. If we take the below code, for example:
-- File: lazinessTest.hs
(>?) :: a -> b -> b
a >? b = b
main = (putStrLn "Something") >? (putStrLn "Something else")
This is the output of the program:
$ ./lazinessTest
Something else
This indicates that putStrLn "Something" is never evaluated. But it's still being passed to the function, in the form of a 'thunk'. These 'thunks' are unevaluated values that, rather than being concrete values, are like a breadcrumb-trail of how to compute the value. This is how Haskell laziness works.
In our case, two 'thunks' are passed to >?, but only one is passed out, meaning that only one is evaluated in the end. This also applies to const, where the second argument can be safely ignored and therefore is never computed. As for map, laziness means that only what is actually demanded gets computed, not the end of the list -- in your case, the second element of the original list.
However, it's best to leave the thinking about laziness to the compiler and keep coding, unless you're dealing with IO, in which case you really, really should think about laziness, because you can easily go wrong, as I've just demonstrated.
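As an example of how laziness and IO interact badly, here is the classic lazy-IO gotcha (a sketch; "input.txt" is a made-up file name):
import System.IO

main :: IO ()
main = do
  h <- openFile "input.txt" ReadMode
  s <- hGetContents h   -- s is an unevaluated thunk that reads from h
  hClose h              -- the handle is closed before s is ever forced...
  putStrLn s            -- ...so this prints an empty string, not the file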
There are lots and lots of online articles on the Haskell wiki to look at, if you want more detail.
A function application can evaluate to a value of its return type:
head (x:_) = x
or to an exception/error:
head _ = error "Head: List is empty!"
or to bottom (⊥):
a = a
b = last [1 ..]
The Haskell definition says:
An expression is in weak head normal form (WHNF), if it is either:
a constructor (eventually applied to arguments) like True, Just (square 42) or (:) 1
a built-in function applied to too few arguments (perhaps none) like (+) 2 or sqrt.
or a lambda abstraction \x -> expression.
Why do built-in functions receive special treatment? According to lambda calculus, there is no difference between a partially applied function and any other function, because at the end we have only one argument functions.
A normal function applied to an argument, like the following:
(\x y -> x + 1 : y) 1
Can be reduced, to give:
\y -> 1 + 1 : y
In the first expression, the "outermost" thing was an application, so it was not in WHNF. In the second, the outermost thing is a lambda abstraction, so it is in WHNF (even though we could do more reductions inside the function body).
Now lets consider the application of a built-in (primitive) function:
(+) 1
Because this is a built-in, there's no function body in which we can substitute 1 for the first parameter. The evaluator "just knows" how to evaluate fully "saturated" applications of (+), like (+) 1 2. But there's nothing that can be done with a partially applied built-in; all we can do is produce a data structure describing "apply (+) to 1 and wait for one more argument", but that's exactly what the thing we're trying to reduce is. So we do nothing.
Built-ins are special because they're not defined by lambda calculus expressions, so the reduction process can't "see inside" their definition. Thus, unlike normal functions applications, built-in function applications have to be "reduced" by just accumulating arguments until they are fully "saturated" (in which case reduction to WHNF is by running whatever the magic implementation of the built-in is). Unsaturated built-in applications cannot be reduced any further, and so are already in WHNF.
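A tiny illustration (inc is a made-up name):
inc :: Int -> Int
inc = (+) 1   -- unsaturated application of a built-in: already in WHNF

-- seq inc 'a'   -- evaluates to 'a' without doing anything further to inc
-- inc 2         -- now saturated: the built-in runs and yields 3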
Consider
Prelude> let f n = [(+x) | x <- [1..]] !! n
Prelude> let g = f 20000000 :: Int -> Int
g is at this point not in WHNF! You can see this by evaluating, say, g 3, which takes a noticeable lag because you need WHNF before you can apply an argument. That's when the list is traversed in search of the right function. But afterwards, this choice is fixed: g is in WHNF (and indeed NF: that's the same for lambdas, perhaps what you meant with your question) and thus any subsequent calls will give a result immediately.
I'm not entirely clear on how seq works in Haskell.
It seems like there are lots of cases where it would be useful to write
seq x x
and maybe even define a function:
strict x = seq x x
but such a function doesn't already exist so I'm guessing this approach is somehow wrongheaded. Could someone tell me if this is meaningful or useful?
seq a b returns the value of b, but makes that value depend on the evaluation of a. Thus, seq a a is exactly the same thing as a.
I think the misunderstanding here is expecting seq to take some action; pure functions don't take actions. seq just introduces a dependency.
There is a function evaluate :: a -> IO a in Control.Exception that does what you want (note that it's in IO). It lives in Control.Exception because it's useful for checking whether evaluating an expression would throw, and if so, handling the exception.
The definition x = seq a b means that if x is evaluated, then a will also be evaluated (but x will be equal to b).
It does not mean "evaluate a now".
Notice that if x is being evaluated, then since x equals b, then b will also be evaluated.
And hence, if I write x = seq a a, I am saying "if x is evaluated then evaluate a". But if I just do x = a, that would achieve exactly the same thing.
When you say seq a b what you are telling the computer is,
Whenever you need to evaluate b, evaluate a for me too, please.
If we replace both a and b with x you can see why it's useless to write seq x x:
Whenever you need to evaluate x, evaluate x for me too, please.
Asking the computer to evaluate x when it needs to evaluate x is just a useless thing to do – it was going to evaluate x anyway!
seq does not evaluate anything – it simply tells the computer that when you need the second argument, also evaluate the first argument. Understanding this is actually really important, because it allows you to understand the behaviour of your programs much better.
seq x x would be entirely, trivially redundant.
Remember, seq is not a command. The presence of seq a b in your program does not force evaluation of a or b. What it does do is make the evaluation of the result artificially dependent on the evaluation of a, even though the result itself is b. If you print out seq a b, a will be evaluated and its result discarded. Since x already depends on itself, seq x x is silly.
Close! deepseq (which is the "more thorough" seq -- see the docs for a full description) has the type NFData a => a -> b -> b, and force (with type NFData a => a -> a) is defined simply as
force :: (NFData a) => a -> a
force x = x `deepseq` x
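A small usage sketch (assuming the deepseq package; the contrast with plain seq is the point):
import Control.DeepSeq (deepseq)

main :: IO ()
main = do
  let xs = [1, 2, undefined] :: [Int]
  xs `seq` putStrLn "seq: fine"          -- WHNF: only the outer (:) is forced
  xs `deepseq` putStrLn "deepseq: boom"  -- NF: every element is forced; throws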