I'm a Haskell newbie and I'm reading:
http://www.seas.upenn.edu/~cis194/spring13/lectures/01-intro.html
It states "In Haskell one can always “replace equals by equals”, just like you learned in algebra class.". What is meant by this and what are its advantages ?
I don't recall learning this in algebra but perhaps I do not recognise the terminology.
It means that if you know that A (an expression) is equal to B (another expression), then you may always replace A with B in any expression involving A, and vice versa.
For instance, we know that even = not . odd. Therefore
filter even
=
filter (not . odd)
On the other hand, we know that odd satisfies the following equation
odd = (1 ==) . (`mod` 2)
As such, we also know that
filter even
=
filter (not . odd)
=
filter (not . (1 ==) . (`mod` 2))
Moreover, we know that (`mod` 2) always returns 0 or 1. So, by case analysis on those two values, the following is valid.
not . (1 ==)
=
(0 ==)
Therefore, we can also say
filter even
=
filter ((0 ==) . (`mod` 2))
The advantage of being able to replace equals by equals is that you can design a program by massaging one equation into another until a suitable definition is found, much like the typical solve-for-x problems of algebra.
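For instance, here is a quick sanity check (my own test, not from the lecture) that the three versions derived above agree:

main :: IO ()
main = do
  let xs = [0 .. 10] :: [Int]
  print (filter even xs)                  -- [0,2,4,6,8,10]
  print (filter (not . odd) xs)           -- [0,2,4,6,8,10]
  print (filter ((0 ==) . (`mod` 2)) xs)  -- [0,2,4,6,8,10]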
In its simplest form, substituting "equals by equals" means replacing a defined identifier with its definition. For instance
let x = f 1 in x + x
can be equivalently written as
f 1 + f 1
in the sense that the result will be the same. In GHC, you can expect the second one to compute f 1 twice, possibly degrading performance, but the result of the sum is the same.
In impure languages such as OCaml, the two snippets above are not equivalent. This is because side effects are allowed: evaluating f 1 can have observable effects. For instance, f could be defined as follows:
(* OCaml code *)
let f =
  let r = ref 0 in
  fun x -> r := !r + x; !r
Using the above definition, f has an internal mutable state, which is incremented by the argument every time f is called, before the new state is returned. Because of this,
f 1 + f 1
would evaluate to 1 + 2 since the state is incremented twice, while
let x = f 1 in x + x
would evaluate to 1 + 1, since only one increment of the state is performed.
The consequence is that, in OCaml, replacing x with its definition is not a semantics-preserving program transformation. Of course, the same holds in any imperative language that allows side effects. Only in pure languages (Haskell, Agda, Coq, ...) is the transformation safe.
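By contrast, here is a sketch of the Haskell analogue, with an arbitrary pure f of my own choosing; purity guarantees that both forms agree:

-- A pure function: its result depends only on its argument.
f :: Int -> Int
f x = x * 10 + 1

main :: IO ()
main = do
  print (let x = f 1 in x + x)  -- 22
  print (f 1 + f 1)             -- 22: always equal, by purity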
I was introduced to the use of fold in defining functions. I have an idea of how it works, but I'm not sure why one should do it. To me, it feels like it just renames the data type and data values ... It would be great if you could show me examples where using fold is significant.
data List a = Empty | (:-:) a (List a)

-- Define elements
List a :: *
Empty :: List a
(:-:) :: a -> List a -> List a

foldrList :: (a -> b -> b) -> b -> List a -> b
foldrList f e Empty = e
foldrList f e (x :-: xs) = f x (foldrList f e xs)
The idea of folding is a powerful one. The fold functions (foldr and foldl in the Haskell base library) belong to a family called higher-order functions (for those who don't know: these are functions which take functions as parameters or return functions as their output).
This allows for greater code clarity, as the intention of the program is more clearly expressed. A function written using folds strongly indicates an intention to iterate over the list and apply a function repeatedly to obtain an output. Plain recursion is fine for simple programs, but as complexity increases it can become difficult to understand quickly what is happening.
Greater code re-use can be achieved with folding, because a function is passed in as a parameter. If some behaviour of a program depends on a Boolean or an enumeration value, that behaviour can be abstracted into a separate function, which is then passed to the fold as an argument. This achieves greater flexibility and simplicity (two simpler functions versus one more complex function).
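As a sketch of that reuse, several functions fall out of the question's foldrList just by varying the function argument (the names below are my own):

data List a = Empty | (:-:) a (List a)

foldrList :: (a -> b -> b) -> b -> List a -> b
foldrList f e Empty      = e
foldrList f e (x :-: xs) = f x (foldrList f e xs)

-- One recursion scheme, many functions:
sumList, productList :: Num a => List a -> a
sumList     = foldrList (+) 0
productList = foldrList (*) 1

lengthList :: List a -> Int
lengthList = foldrList (\_ n -> n + 1) 0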
Higher-Order Functions are also essential for Monads.
Credit to the comments on this question as well, for being varied and informative.
Higher-order functions like foldr, foldl, map, zipWith, &c. capture common patterns of recursion so you can avoid writing manually recursive definitions. This makes your code higher-level and more readable: instead of having to step through the code and infer what a recursive function is doing, the programmer can reason about compositions of higher-level components.
For a somewhat extreme example, consider a manually recursive calculation of standard deviation:
standardDeviation numbers = step1 numbers
  where
    -- Calculate length and sum to obtain mean
    step1 = loop 0 0
      where
        loop count sum (x : xs) = loop (count + 1) (sum + x) xs
        loop count sum [] = step2 sum count numbers

    -- Calculate squared differences with mean
    step2 sum count = loop []
      where
        loop diffs (x : xs) = loop ((x - (sum / count)) ^ 2 : diffs) xs
        loop diffs [] = step3 count diffs

    -- Calculate final total and return square root
    step3 count = loop 0
      where
        loop total (x : xs) = loop (total + x) xs
        loop total [] = sqrt (total / count)
(To be fair, I went a little overboard by also inlining the summation, but this is roughly how it may typically be done in an imperative language—manually looping.)
Now consider a version using a composition of calls to standard functions, some of which are higher-order:
standardDeviation numbers        -- The standard deviation
  = sqrt                         -- is the square root
  . mean                         -- of the mean
  . map (^ 2)                    -- of the squares
  . map (subtract                -- of the differences
      (mean numbers))            -- with the mean
  $ numbers                      -- of the input numbers
  where                          -- where
    mean xs                      -- the mean
      = sum xs                   -- is the sum
      / fromIntegral (length xs) -- over the length.
This more declarative code is also, I hope, much more readable—and without the heavy commenting, could be written neatly in two lines. It’s also much more obviously correct than the low-level recursive version.
Furthermore, sum, map, and length can all be implemented in terms of folds, as well as many other standard functions like product, and, or, concat, and so on. Folding is an extremely common operation on not only lists, but all kinds of containers (see the Foldable typeclass), because it captures the pattern of computing something incrementally from all elements of a container.
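For instance, here is a sketch of how a few of those could be written as single folds (hand-rolled versions shadowing the Prelude names):

import Prelude hiding (sum, length, map)

sum :: Num a => [a] -> a
sum = foldr (+) 0

length :: [a] -> Int
length = foldr (\_ n -> n + 1) 0

map :: (a -> b) -> [a] -> [b]
map f = foldr (\x acc -> f x : acc) []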
A final reason to use folds instead of manual recursion is performance: thanks to laziness, and to optimisations that GHC knows how to perform when you use fold-based functions, the compiler may fuse a series of folds (maps, &c.) together into a single loop.
I want to calculate the "e" constant using Haskell's (Prelude) built-in until function. I want to do something like this:
enumber = until (>2.7) iter (1 0)
iter x k = x + (1/(fact (k + 1)))
fact k = foldr (*) 1 [1..k]
When I try to run this code, I get this error:
Occurs check: cannot construct the infinite type: a ~ a -> a
Expected type: (a -> a) -> a -> a
Actual type: a -> a -> a
Relevant bindings include enumber :: a -> a (bound at Lab2.hs:65:1)
In the second argument of ‘until’, namely ‘iter’
In the expression: until (> 2.7) iter (1 0)
By "e" I mean e = 2.71828..
The concrete mistake that causes this error is the notation (1 0). This doesn't make any sense in Haskell: it is parsed so that 1 is a function which is applied to the argument 0, with the result then used. You apparently meant to pass both 1 and 0 as (initial) arguments. That's what we have tuples for, written (1,0).
Now, before trying to write any definitions, we should make clear what types we need and write them out. Always start with your type signatures; they go a long way towards guiding you to what the actual definitions should look like!
enumber :: Double -- could also be a polymorphic number type, but let's keep it simple.
type Index = Double -- this should, perhaps, actually be an integer, but again, for simplicity, we use only `Double`
fact :: Index -> Double
Now, if you want to do something like enumber = until (>2.7) iter (1,0), then iter would need to both add up the series expansion and increment the k index (until knows nothing about indices), i.e. something like
iter :: (Double, Index) -> (Double, Index)
But right now your iter has a signature more like
iter :: Double -> Index -> Double
i.e. it does not do the index-incrementing. Also, it's curried, i.e. it doesn't accept its arguments as a tuple.
Let's try to work with a tuple signature:
iter :: (Double, Index) -> (Double, Index)
iter (x,k) = ( x + 1/(fact (k + 1)), k+1 )
If you want to use this with until, you have the problem that you're always working with tuples, not just with the accumulated result. You need to throw away the index, both in the termination condition and in the final result; this can easily be done with the fst function:
enumber = fst $ until ((>2.7) . fst) iter (1,0)
Now, while this version of the code will type-check, it's neither elegant nor efficient nor accurate (being greater than 2.7 is hardly a meaningful condition here...). As chi remarks, a good way of summing up stuff is the scanl function.
Apart from avoiding manually incrementing and passing around an index, you should also avoid recalculating the entire factorial over and over again. Doing that is a pretty general code smell (there's a reason fact isn't defined in the standard libraries):
recipFacts :: [Double]  -- Infinite list of reciprocal factorials, starting from 1/0!
recipFacts = go 1
  where go k = 1 : map (/k) (go (k+1))
Incidentally, this can also be written as a scan: scanl (/) 1 [1..] (courtesy of Will Ness).
Next we can use scanl to calculate the partial sums, and use some termination condition. However, because the series converges so quickly, there's actually a hack that works fine and is even simpler:
enumber :: Double
enumber = sum $ takeWhile (>0) recipFacts
-- result: 2.7182818284590455
Here I've used the fact that the fast-growing factorial quickly causes the floating-point reciprocals to underflow to zero.
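For completeness, here is a sketch of the scanl-based version alluded to above; the termination condition (stop once an extra term no longer changes the partial sum) is my own choice:

-- Assumes the recipFacts defined above.
enumber' :: Double
enumber' = go (scanl (+) 0 recipFacts)
  where
    go (x : y : rest)
      | x == y    = x            -- the series has converged in Double
      | otherwise = go (y : rest)
    go _ = error "unreachable: the list of partial sums is infinite"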
Of course, there's really no need to sum anything up yourself at all here: the most to-the-point definition is
enumber = exp 1
and nothing else.
enumber = until (>2.7) iter (1 0)
--                          ^^^^^
Above you are applying "function" 1 to argument 0. This can't work.
You may want to use a pair instead: (1, 0). In that case, note that iter must be changed to accept and return a pair. Also, the predicate >2.7 must be adapted to pairs.
If you don't want to use pairs, you need a different approach. Look up the scanl function, which you can use to compute partial sums. Then, you can use dropWhile to discard partial sums until some good-enough predicate is satisfied.
An example: the first ten partial sums of n^2.
> take 10 $ scanl (+) 0 [ n^2 | n<-[1..] ]
[0,1,5,14,30,55,91,140,204,285]
Note that this approach only works if you compute each list element independently. If you want to reuse a computed value from one element to the next, you need something else. E.g.
> take 10 $ snd $ mapAccumL (\(s,p) x -> ((s+p,p*2),s+p)) (0,1) [1..]
[1,3,7,15,31,63,127,255,511,1023]
Dissected:
mapAccumL (\(s,p) x -> ((s+p,p*2),s+p)) (0,1) [1..]
--                        a   b    c      d e
s: previous sum
p: previous power of two
x: current element of [1..]
a: next sum
b: next power of two
c: element in the generated list
d: first sum
e: first power of two
Still, I am not a big fan of mapAccumL. Using iterate and pairs looks nicer.
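For instance, here is a sketch of the same 2^n - 1 sequence with iterate and pairs (my own rendering of that remark):

-- Each state is (current sum, next power of two);
-- iterate builds the infinite list of successive states.
sums :: [Integer]
sums = map fst $ iterate (\(s, p) -> (s + p, p * 2)) (1, 2)

-- take 10 sums == [1,3,7,15,31,63,127,255,511,1023]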
I'm trying to learn Haskell, and so I was trying out question 26 of Project Euler in Haskell:
http://projecteuler.net/problem=26
My solution to the problem is this:
answer26 = answer26' 1000

answer26' n = snd $ maximum $ map (\x -> cycleLength x [1]) [2..n - 1]
  where
    cycleLength n (r:rs)
      | i /= Nothing = (1 + fromJust i, n)
      | r < n        = cycleLength n $ (10*r):r:rs
      | otherwise    = cycleLength n $ (r `mod` n):r:rs
      where i = elemIndex r rs
I realize that this isn't the most efficient algorithm, but seeing as it's naively O(n^3) (where n = 1000) that is not such an issue. What I am concerned about, though, is this: from my understanding of monads, one of their main properties is that they in some sense "mark" anything that has used the monad. The function fromJust seems to fly directly in the face of that. Why does it exist? Also, assuming its existence is justified, is my usage of it in the above code good practice?
Usage of partial functions (functions that may fail to return a value) is generally discouraged. Functions like head and fromJust exist because they're occasionally convenient; you can sometimes write shorter code, which is more understandable to learners. Lots of functional algorithms are expressed in terms of head and tail, and fromJust is conceptually the same as head.
It's usually preferable to use pattern matching and to avoid partial functions, because it allows the compiler to catch errors for you. In your code snippet you have carefully checked that the value is never Nothing, but in large real-life codebases, code can be many years old, thousands of lines long, and maintained by many developers. It's very easy for a developer to reorder some code and miss a check like that. With pattern matching, the check is right there in the code structure, not just in some arbitrary Bool expression.
It's not too difficult to replace your usage of fromJust with pattern-matching:
answer26 = answer26' 1000

answer26' n = snd $ maximum $ map (\x -> cycleLength x [1]) [2..n - 1]
  where
    cycleLength n (r:rs) = case elemIndex r rs of
      Just i  -> (1 + i, n)
      Nothing -> if r < n
                   then cycleLength n $ (10*r):r:rs
                   else cycleLength n $ (r `mod` n):r:rs
And (I think) the result is a bit clearer too.
Edit: There's an apparently "theoretically ok" place to use fromJust mentioned in Typeclassopedia, though you will need someone other than me to explain wtf that is all about.. ;)
The monad interface doesn't include any specific function for "extracting" values from a monad, only for putting them in (return).
However, it doesn't forbid these kinds of functions either. When they exist, they will be specific to each monad (hence the multitude of run* functions: runIdentity, runReader, runWriter, runState... each with different arguments.)
By design, IO doesn't have any such "get out" function, and so it serves to "trap" impure values inside the monad. But "not being able to get out" is not a requirement for monads in general. What counts is that they respect the monad laws.
With comonads, the situation is reversed. There is a common function to extract values from them (extract) that every comonad must implement. But the functions to "put the values in", when they exist, vary for each particular comonad (env, store...)
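Here is a sketch of that contrast, with a hand-rolled Comonad class (the real one lives in the comonad package and also has duplicate) and Env as the example:

-- Every comonad can extract a value:
class Functor w => Comonad w where
  extract :: w a -> a
  extend  :: (w a -> b) -> w a -> w b

-- Env pairs a value with an environment; putting a value *in*
-- needs the comonad-specific env function, not a shared interface.
data Env e a = Env e a

instance Functor (Env e) where
  fmap f (Env e a) = Env e (f a)

instance Comonad (Env e) where
  extract (Env _ a)    = a
  extend f w@(Env e _) = Env e (f w)

env :: e -> a -> Env e a
env = Env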
As for fromJust, it is good practice to avoid it whenever possible because it is a partial function which may fail to match at runtime.
This pattern is so common that there is even a function for it: maybe :: b -> (a -> b) -> Maybe a -> b
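Two quick illustrations (my own):
> maybe 0 (+1) (Just 5)
6
> maybe 0 (+1) Nothing
0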
In your case, if you change the lambda to \x -> (cycleLength x [1], x), that is, if you construct the pair outside cycleLength, you can write:
cycleLength n (r:rs) = maybe (cycleLength n rs') (1+) $ elemIndex r rs
  where
    rs' | r < n     = (10*r):r:rs
        | otherwise = (r `mod` n):r:rs
Also, because you are looking just for a maximum, not the actual value, it will work even with id instead of (1+).
I've been asking a few questions about strictness, but I think I've missed the mark before. Hopefully this is more precise.
Let's say we have:
import Data.List (foldl')

n = 1000000
f z = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) z [1..n]
Without changing f, what should I set
z = ...
So that f z does not overflow the stack? (i.e. runs in constant space regardless of the size of n)
It's okay if the answer requires GHC extensions.
My first thought is to define:
g (a1, a2) = (!a1, !a2)
and then
z = g (0, 0)
But I don't think g is valid Haskell.
So your strict foldl' is only going to evaluate the result of your lambda at each step of the fold to weak head normal form, i.e. it is only strict in the outermost constructor. Thus the tuple will be evaluated; however, the additions inside the tuple may build up as thunks. This in-depth answer actually seems to address your exact situation here.
W/R/T your g: you are thinking of the BangPatterns extension, which would look like
g (!a1, !a2) = (a1, a2)
and which evaluates a1 and a2 to WHNF before returning them in the tuple.
What you want to be concerned about is not your initial accumulator, but rather your lambda expression. This would be a nice solution:
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1..n]
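Here is a complete runnable sketch of that fix (the main and the type signatures are mine):

{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

n :: Integer
n = 1000000

-- The bang patterns force both components of the incoming accumulator
-- at each step, so unevaluated additions never pile up.
f :: (Integer, Integer) -> (Integer, Integer)
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1 .. n]

main :: IO ()
main = print (f (0, 0))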
EDIT: After noticing your other questions, I see I didn't read this one very carefully. Your goal is to have "strict data", so to speak. Your other option, then, is to make a new tuple type with strictness annotations on its fields:
data Tuple a b = Tuple !a !b
Then, whenever a Tuple value is evaluated to WHNF, its two fields are evaluated as well, so pattern matching on Tuple a b always gives you evaluated values.
You'll need to change your function regardless.
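A sketch of the whole program with that strict pair (note that this does change f, which the question was trying to avoid):

import Data.List (foldl')

data Tuple a b = Tuple !a !b

n :: Integer
n = 1000000

-- The strict fields mean that forcing each intermediate Tuple to WHNF
-- (which foldl' does) also forces both sums, so no thunks accumulate.
f :: Tuple Integer Integer -> Tuple Integer Integer
f z = foldl' (\(Tuple x1 x2) y -> Tuple (x1 + y) (y - x2)) z [1 .. n]

main :: IO ()
main = case f (Tuple 0 0) of
  Tuple a b -> print (a, b)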
There is nothing you can do without changing f. If f were overloaded in the type of the pair, you could use strict pairs, but as it stands you're locked into what f does. There's some small hope that the compiler (strictness analysis and transformations) can avoid the stack growth, but it's nothing you can count on.
This FAQ says that
The seq operator is
seq :: a -> b -> b
x seq y will evaluate x, enough to check that it is not bottom, then
discard the result and evaluate y. This might not seem useful, but it
means that x is guaranteed to be evaluated before y is considered.
That's awfully nice of Haskell, but does it mean that in
x `seq` f x
the cost of evaluating x will be paid twice ("discard the result")?
The seq function will discard the value of x, but since the value has been evaluated, all references to x are "updated" to no longer point to the unevaluated version of x, but to instead point to the evaluated version. So, even though seq evaluates and discards x, the value has been evaluated for other users of x as well, leading to no repeated evaluations.
No, it's not compute-and-forget; it's compute, and that forces caching.
For example, consider this code:
let x = 1 + 1
in x + 1
Since Haskell is lazy, this evaluates to ((1 + 1) + 1): a thunk containing the sum of a thunk and one, the inner thunk being one plus one.
Let's use JavaScript, a non-lazy language, to show what this looks like:
function () {
  var x = function () { return 1 + 1; };
  return x() + 1;
}
Chaining together thunks like this can cause stack overflows, if done repeatedly, so seq to the rescue.
let x = 1 + 1
in x `seq` (x + 1)
I'm lying when I tell you this evaluates to (2 + 1), but that's almost true: it's just that the calculation of the 2 is forced to happen before the rest happens (but the 2 is still calculated lazily).
Going back to JavaScript:
function () {
  var x = function () { return 1 + 1; };
  return (function (x) {
    return x + 1;
  })(x());
}
I believe x will only be evaluated once (and the result retained for future use, as is typical for lazy operations). That behavior is what makes seq useful.
You can always check with unsafePerformIO or trace…
import System.IO.Unsafe (unsafePerformIO)

main = print (x `seq` f (x + x))
  where
    f = (+4)
    x = unsafePerformIO $ print "Batman!" >> return 3
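Or the same experiment with trace (a sketch):

import Debug.Trace (trace)

main = print (x `seq` f (x + x))
  where
    f = (+4)
    x = trace "Batman!" 3
    -- "Batman!" appears once: seq forces the shared thunk for x,
    -- and both uses in (x + x) reuse the memoized 3.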
Of course seq by itself does not "evaluate" anything. It just records a forcing-order dependency. The forcing itself is triggered by pattern matching. When seq x (f x) is forced, x will be forced first (memoizing the resulting value), and then f x will be forced. Haskell's lazy evaluation means it memoizes the results of forcing expressions, so no repeat "evaluation" (scary quotes here) will be performed.
I put "evaluation" into scary quotes because it implies full evaluation. In the words of Haskell wikibook, "Haskell values are highly layered; 'evaluating' a Haskell value could mean evaluating down to any one of these layers."
Let me reiterate: seq by itself does not evaluate anything. seq x x does not evaluate x under any circumstance. seq x (f x) does not evaluate anything when f = id, contrary to what the report seems to have been saying.