Replacing repetitive function applications with deterministic values - Haskell

f x y z = [n | n <- z, n > x + y]
f 1 2 [3,4]
Would x + y be evaluated only once, so that the successive comparisons reuse the value 3 instead of recomputing it? Is GHC optimized to do this, given that FP brings us the virtue of referential transparency?
How can I trace the evaluation to prove it?

I don't think the computed value will be reused.
The general problem with this kind of thing is, x + y is cheap, but you could instead have some operation there that produces an utterly vast result, which you probably don't want to keep in memory. Which is a wordy way of saying "this is a time/space tradeoff".
Because of this, it seems GHC tends not to reuse work, in case the time saved doesn't make up for the space lost.
The way to find out for sure is to ask GHC to dump Core when it compiles your code. You can then see precisely what's going to get executed. (Be prepared for it to be very verbose though!) Oh, and make sure you turn on optimisations! (I.e., the -O2 flag.)
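For example, one minimal invocation (these are standard GHC dump flags; F.hs stands for whichever file defines f):
ghc -O2 -ddump-simpl -dsuppress-all F.hs
In the resulting Core you can see whether x + y appears as a single let-bound value or is recomputed inside the loop over z.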
If you rephrase your function as
f x y z = let s = x + y in [ n | n <- z, n > s ]
Now s will definitely be executed only once. (I.e., once per call to f. Each time you call f it'll still recompute s.)
Incidentally, if you're interested in saving already-computed results for the whole function, the search term you're looking for is "memoisation".
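A classic minimal sketch of memoisation (purely illustrative, not tied to the question's f):
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

memoFib :: Int -> Integer
memoFib = (fibs !!)
Repeated calls to memoFib reuse the already-computed prefix of the shared fibs list instead of recomputing it.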

What will happen can depend on whether you are using ghci vs. ghc and then, if you are compiling the code, what optimization level is being used.
Here is one way to test the evaluations:
import Debug.Trace

f x y z = [n | n <- z, n > tx x + ty y]
  where tx = trace "x"
        ty = trace "y"

main = print $ f 1 2 [3,4]
With GHC 7.8.3 I get the following results:
ghci: x y x y [4]
ghc (no optimization): x y x y [4]
ghc -O2: x y [4]
It is possible that the addition of the trace calls affects CSE optimization. But this does show that -O2 will hoist x+y out of the loop.

Related

Infinite loop on a simple list for two predicates

When I try to compile this line:
mult y = [x*2 | x <- [1..], x <= y]
and run it, I get an infinite loop that I must cancel with Ctrl+C:
*Main> mult 10
[2,4,6,8,10,12,14,16,18,20
Do you know why those predicates are not correctly interpreted?
Thank you
You're looking for
mult y = [x * 2 | x <- [1..y]]
In this version, the [1..y] gets compiled to a finite list from 1 up to y. In your original code
mult y = [x * 2 | x <- [1..], x <= y]
Haskell doesn't understand complicated concepts like the nature of <= as an ordering or that [1..] is a monotonic list. So Haskell is determined to come up with every natural number, just to make sure some really big number out there doesn't happen to be less than or equal to y, by some fluke. You and I can look at that code and see that it obviously won't find any, but Haskell doesn't understand that, so it goes looking anyway.
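If you want the stopping behaviour you had in mind, a sketch is to encode the monotonicity yourself with takeWhile (type signature added only for concreteness):
mult :: Integer -> [Integer]
mult y = [x * 2 | x <- takeWhile (<= y) [1..]]
takeWhile cuts the generator off at the first element greater than y, which is exactly the argument Haskell cannot make on its own.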

Haskell | Are let expressions recalculated?

Let's say we have this function:
foo n = let comp n = n * n * n + 10
            otherComp n = (comp n) + (comp n)
        in (otherComp n) + (otherComp n)
How many times will comp n get actually executed? 1 or 4? Does Haskell "store" function results in the scope of let?
In GHCi, without optimization, four times.
> import Debug.Trace
> :{
| f x = let comp n = trace "A" n
|           otherComp n = comp n + comp n
|       in otherComp x + otherComp x
| :}
> f 10
A
A
A
A
40
With optimization, GHC might be able to inline the functions and optimize everything. However, in the general case, I would not count on GHC to optimize multiple calls into one. That would require memoizing and/or CSE (common subexpression elimination), which is not always an optimization, hence GHC is quite conservative about it.
As a rule of thumb, when evaluating performance, expect that each (evaluated) call in the code corresponds to an actual call at runtime.
The above discussion applies to function bindings only. For simple pattern bindings consisting of just a variable, like
let x = g 20
in x + x
then g 20 will be computed once, bound to x, and then x + x will reuse the same value twice. With one proviso: that x gets assigned a monomorphic type.
If x gets assigned a polymorphic type with a typeclass constraint, then it acts as a function in disguise.
> let x = trace "A" (200 * 350)
> :t x
x :: Num a => a
> x + x
A
A
140000
Above, 200 * 350 has been recomputed twice, since it got a polymorphic type.
This mostly only happens in GHCi. In regular Haskell source files, GHC uses the Dreaded Monomorphism Restriction to give x a monomorphic type, precisely to avoid recomputation of variables. If that cannot be done and duplicate computation is needed, GHC prefers to raise an error rather than silently cause recomputation. (In GHCi, the DMR is disabled to make more code work as-is, and recomputation happens, as seen above.)
Summing up: variable bindings let x = ... should be fine in source code, and work as expected without duplicating computation. If you want to be completely sure, annotate x with an explicit monomorphic type annotation.
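For instance, a minimal sketch where the annotation is the only addition:
let x = 200 * 350 :: Int
in x + x
With the annotation, x has the ordinary monomorphic type Int, so the product is computed once and the value is shared between the two uses, even in GHCi.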

Why is this tail-recursive Haskell function slower?

I was trying to implement a Haskell function that takes as input an array of integers A
and produces another array B = [A[0], A[0]+A[1], A[0]+A[1]+A[2] ,... ]. I know that scanl from Data.List can be used for this with the function (+). I wrote the second implementation
(which performs faster) after seeing the source code of scanl. I want to know why the first implementation is slower than the second one, despite being tail-recursive.
-- This function works slow.
ps s x [] = x
ps s x y  = ps s' x' y'
  where
    s' = s + head y
    x' = x ++ [s']
    y' = tail y
-- This function works fast.
ps' s [] = []
ps' s y  = [s'] ++ (ps' s' y')
  where
    s' = s + head y
    y' = tail y
Some details about the above code:
Implementation 1 : It should be called as
ps 0 [] a
where 'a' is your array.
Implementation 2: It should be called as
ps' 0 a
where 'a' is your array.
You are changing the way that ++ associates. In your first function you are computing ((([a0] ++ [a1]) ++ [a2]) ++ ...), whereas in the second function you are computing [a0] ++ ([a1] ++ ([a2] ++ ...)). Prepending a few elements to the start of a list is O(1), whereas appending a few elements to the end of a list is O(n) in the length of the list. This leads to a linear versus quadratic algorithm overall.
You can fix the first example by building the list up in reverse order, and then reversing again at the end, or by using something like dlist. However the second will still be better for most purposes. While tail calls do exist and can be important in Haskell, if you are familiar with a strict functional language like Scheme or ML your intuition about how and when to use them is completely wrong.
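A minimal sketch of that accumulate-then-reverse fix (the name psAcc and the type signature are mine, for illustration):
psAcc :: Int -> [Int] -> [Int] -> [Int]
psAcc _ acc []     = reverse acc
psAcc s acc (y:ys) = let s' = s + y in psAcc s' (s' : acc) ys

-- call as: psAcc 0 [] a
Consing onto the front of acc is O(1) per element and the single reverse at the end is O(n), so the whole computation is linear rather than quadratic.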
The second example is better, in large part, because it's incremental; it immediately starts returning data that the consumer might be interested in. If you just fixed the first example using the double-reverse or dlist tricks, your function will traverse the entire list before it returns anything at all.
I would like to mention that your function can be more easily expressed as
drop 1 . scanl (+) 0
Usually, it is a good idea to use predefined combinators like scanl in favour of writing your own recursion schemes; it improves readability and makes it less likely that you needlessly squander performance.
However, in this case, both my scanl version and your original ps and ps' can sometimes lead to stack overflows due to lazy evaluation: Haskell does not necessarily immediately evaluate the additions (depends on strictness analysis).
One case where you can see this is if you do last (ps' 0 [1..100000000]). That leads to a stack overflow. You can solve that problem by forcing Haskell to evaluate the additions immediately, for instance by defining your own, strict scanl:
myscanl :: (b -> a -> b) -> b -> [a] -> [b]
myscanl f q [] = []
myscanl f q (x:xs) = q `seq` let q' = f q x in q' : myscanl f q' xs
ps' = myscanl (+) 0
Then, calling last (ps' [1..100000000]) works.
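For reference, a strict left scan also ships with recent versions of base as Data.List.scanl'; a sketch of the same pipeline using it (assuming base >= 4.8):
import Data.List (scanl')

ps'' :: [Integer] -> [Integer]
ps'' = drop 1 . scanl' (+) 0
scanl' forces each accumulator value as it is produced, which avoids the thunk build-up in much the same way as the hand-written myscanl above.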

List comprehension takes too much memory

I'm a beginner to Haskell and used it to solve some 50 problems of Project Euler but now I'm stuck at problem 66. The problem is that the compiled code (ghc -O2 --make problem66.hs) takes all my machine's free memory after 10-20 seconds. My code looks like this:
-- Project Euler, problem 66
diophantine x y d = x^2 - d*y^2 == 1
minimalsolution d = take 1 [(x, y, d) | y <- [2..],
                                        let x = round $ sqrt $ fromIntegral (d*y^2+1),
                                        diophantine x y d]
issquare x = (round $ sqrt $ fromIntegral x)^2 == x
main = do
  print (map minimalsolution (filter (not . issquare) [1..1000]))
I have a hunch that the problem lies in the infinite list inside the list comprehension for minimalsolution.
I actually thought that, due to laziness, Haskell would evaluate the list only until it finds one element (because of take 1) and on the way discard everything for which diophantine evaluates to False. Am I wrong there?
Interestingly, I did not see this behaviour in ghci. Is it because processing inside ghci is so much slower that I just would have to wait until I see the memory consumption explode - or is it something else?
No spoilers, please. All I want to know is where the extreme memory consumption comes from and how I can fix it.
I haven't profiled before, so stone throwers are welcome.
Haskell determines that [2..] is a constant and reuses it for every element of the list, despite take 1 using only one element of it; it memoizes the list for computing future elements of the same list. You get stuck computing the value for d=61.
Edit:
Interestingly, this one terminates for [1..1000]:
minimalsolution d = take 1 [(x, y, d) | y <- [2..] :: [Int],
                                        let x = round $ sqrt $ fromIntegral (d*y^2+1),
                                        diophantine x y d]
Just added :: [Int]. Memory use looks stable at 1MB. Using Int64 reproduces the problem.
minimalsolution d = take 1 [(x, y, d) | y <- [2..] :: [Int64],
                                        let x = round $ sqrt $ fromIntegral (d*y^2+1),
                                        diophantine x y d]
Edit:
Well, as has been suggested, the difference is caused by overflow. The solution to d=61 is reported as (5983,20568,61), but 5983^2 is nowhere near 61*20568^2.
Inside the comprehension, unnecessary Double values are being created for each value of y.
I couldn't find a solution using list comprehensions that didn't have the space blowup. But rewriting using recursion yields a stable memory profile.
diophantine :: Int -> Int -> Int -> Bool
diophantine x y d = x^2 - d*y^2 == 1

minimalsolution :: Int -> (Int, Int, Int)
minimalsolution d = go 2
  where
    d0 = fromIntegral d :: Double
    go y =
      let x = round $ sqrt $ d0 * fromIntegral y ^ 2 + 1
      in if diophantine x y d
           then (x, y, d)
           else go (y + 1)
For what it is worth I have tested this now after 6 years and this problem does not appear anymore. The memory consumption stays very low with GHC 8.6.5. I assume that this was indeed a problem in the compiler which has been fixed at some point.

Time cost of Haskell `seq` operator

This FAQ says that
The seq operator is
seq :: a -> b -> b
x `seq` y will evaluate x, enough to check that it is not bottom, then
discard the result and evaluate y. This might not seem useful, but it
means that x is guaranteed to be evaluated before y is considered.
That's awfully nice of Haskell, but does it mean that in
x `seq` f x
the cost of evaluating x will be paid twice ("discard the result")?
The seq function will discard the value of x, but since the value has been evaluated, all references to x are "updated" to no longer point to the unevaluated version of x, but to instead point to the evaluated version. So, even though seq evaluates and discards x, the value has been evaluated for other users of x as well, leading to no repeated evaluations.
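A small demonstration of that sharing (a sketch using Debug.Trace; the Int annotation is mine, to keep the binding monomorphic):
import Debug.Trace

main :: IO ()
main = let x = trace "A" (1 + 1 :: Int)
       in print (x `seq` x + x)
This prints "A" exactly once and then 4: seq forces x, and the later uses of x in x + x reuse the already-evaluated result.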
No, it's not compute and forget, it's compute - which forces caching.
For example, consider this code:
let x = 1 + 1
in x + 1
Since Haskell is lazy, this builds the thunk ((1 + 1) + 1): a thunk containing the sum of a thunk and one, the inner thunk being one plus one.
Let's use javascript, a non-lazy language, to show what this looks like:
function(){
  var x = function(){ return 1 + 1 };
  return x() + 1;
}
Chaining together thunks like this can cause stack overflows, if done repeatedly, so seq to the rescue.
let x = 1 + 1
in x `seq` (x + 1)
I'm lying when I tell you this evaluates to (2 + 1), but that's almost true - it's just that the calculation of the 2 is forced to happen before the rest happens (but the 2 is still calculated lazily).
Going back to javascript:
function(){
  var x = function(){ return 1 + 1 };
  return (function(x){
    return x + 1;
  })( x() );
}
I believe x will only be evaluated once (and the result retained for future use, as is typical for lazy operations). That behavior is what makes seq useful.
You can always check with unsafePerformIO or trace…
import System.IO.Unsafe (unsafePerformIO)

main = print (x `seq` f (x + x))
  where
    f = (+4)
    x = unsafePerformIO $ print "Batman!" >> return 3
Of course seq by itself does not "evaluate" anything. It just records the forcing order dependency. The forcing itself is triggered by pattern-matching. When seq x (f x) is forced, x will be forced first (memoizing the resulting value), and then f x will be forced. Haskell's lazy evaluation means it memoizes the results of forcing of expressions, so no repeat "evaluation" (scary quotes here) will be performed.
I put "evaluation" into scary quotes because it implies full evaluation. In the words of Haskell wikibook, "Haskell values are highly layered; 'evaluating' a Haskell value could mean evaluating down to any one of these layers."
Let me reiterate: seq by itself does not evaluate anything. seq x x does not evaluate x under any circumstance. seq x (f x) does not evaluate anything when f = id, contrary to what the report seems to have been saying.
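To make the "layers" point concrete, a few GHCi one-liners (a sketch; seq only forces to weak head normal form):
> (undefined, undefined) `seq` "ok"   -- prints "ok": the pair constructor is the outermost layer
> [undefined] `seq` "ok"              -- prints "ok": forcing stops at the (:) constructor
> undefined `seq` "ok"                -- throws: here the outermost layer is already bottom
seq only peels off the outermost constructor; it never evaluates deeper than that.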
