Can't properly understand lambdas in Haskell

I have the following code, implementing inverse function calculation, based on these formulas:
derivation :: (Fractional a) => (a -> a) -> (a -> a)
derivation f = \ x -> ( ( f (x + dx) - f (x) ) / dx ) where dx = 0.1

evalA k f
  | k == 0 = \x -> x
  | otherwise = \x -> (derivation (evalA (k-1) f) x) / (derivation f x)

inverseFun f x =
  let
    x0 = 3.0
    eps = 0.001
    iter k prev sum =
      let
        elemA = evalA k f x0
        elemB = prev * (x - (f x0)) / (if k == 0 then 1 else k)
        newItem = elemA * elemB
      in
        if abs (newItem) < eps
          then sum
          else iter (k + 1) elemB (sum + newItem)
  in
    iter 0 1.0 0.0

f1 = \x -> 1.0 * x * x

main = do
  print $ inverseFun f1 2.5
I need to optimise it by moving evalA inside inverseFun and storing the previous step's calculation A'n/F' to reuse on the next iteration, if possible. As far as I understand, each call to evalA returns some sort of function, and x is applied afterwards, right before declaring elemA.
How can I convert or rewrite my evalA so that it stores previous results (by passing these results through iter, obviously)?
Don't mind that these calculations are not very precise; they require a good choice of x0 and eps. My main question is about the lambda conversion.

If you change your definition of inverseFun such that the (if k == 0 then 1 else k) is instead fromIntegral (if k == 0 then 1 :: Int else k), then you can provide type signatures to all of your functions:
derivation :: (Fractional a) => (a -> a) -> a -> a
evalA :: (Fractional a) => Int -> (a -> a) -> a -> a
inverseFun :: (Fractional a, Ord a) => (a -> a) -> a -> a
f1 :: (Fractional a) => a -> a
Which certainly helps out.
This is actually important for my solution to your problem, since we need k to be an Int, and you've used it as a Fractional a => a. The fromIntegral fixes that, but it needs to know that it's an Int, so I just added the inline type signature to help the compiler along.
Since your function only depends on the previous single value, you can use our handy friend from Prelude, iterate :: (a -> a) -> a -> [a]. This applies a function over and over again, producing an infinite list of values. We can then index it at any point to get the desired result (this is why having k an Int is important!).
Our function will look like
evalA :: Fractional a => Int -> (a -> a) -> a -> a
evalA k f = iterate go id !! k
  where
    go = ???
Here id is the same as your base case of \x -> x, just shorter and with more optimization rules. It serves as the initial value for generating this list. To implement go, the actual computation, we need it to accept the previous result as its argument:
  where
    go prev = \x -> derivation prev x / derivation f x
But this is considered "poor style" by hlint, and so it is suggested to convert this to the form
  where
    go prev x = derivation prev x / derivation f x
And that's it! I tested it and got the exact same result for your example input. The full code can be viewed here.
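Since that link isn't reproduced here, below is a minimal sketch of how the pieces might fit together: my own assembly of the question's code with the iterate-based evalA and the fromIntegral change, nothing else altered (I renamed the accumulator sum to acc only to avoid shadowing Prelude's sum).

derivation :: (Fractional a) => (a -> a) -> a -> a
derivation f = \x -> (f (x + dx) - f x) / dx where dx = 0.1

evalA :: (Fractional a) => Int -> (a -> a) -> a -> a
evalA k f = iterate go id !! k
  where
    go prev x = derivation prev x / derivation f x

inverseFun :: (Fractional a, Ord a) => (a -> a) -> a -> a
inverseFun f x =
  let
    x0 = 3.0
    eps = 0.001
    iter k prev acc =
      let
        elemA = evalA k f x0
        elemB = prev * (x - f x0) / fromIntegral (if k == 0 then 1 :: Int else k)
        newItem = elemA * elemB
      in
        if abs newItem < eps
          then acc
          else iter (k + 1) elemB (acc + newItem)
  in
    iter 0 1.0 0.0

f1 :: (Fractional a) => a -> a
f1 = \x -> 1.0 * x * x

main :: IO ()
main = print $ inverseFun f1 2.5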

Related

How to fix "Illegal type signature" error in Haskell

I ran into the following error in Haskell:
"Type signatures are only allowed in patterns with ScopedTypeVariables"
How should I reuse the defined variables? Thanks in advance.
sum :: (Double -> Double) -> (Double -> Double) -> Int -> (Double -> Double)
sum f g n = (\x -> helper f g n x)
  where
    helper :: (Double -> Double) -> (Double -> Double) -> Int -> Double -> Double
    |n == 0 = 0
    |mod n 2 == 1 = f(x) + helper f g n-1 f(x)
    |otherwise = g(x) + helper f g n-1 g(x)
This actually looks more like a syntactical error: you never defined a function body for helper. Indeed, you defined the signature of helper, followed by guards (the | ... part), but you should state helper f g n x again before those guards, i.e. helper f g n x | n == 0 = ....
Furthermore, I don't think it is useful to give helper parameters for f and g here, since these remain fixed throughout the recursion.
You can probably define the function as:
sumfg :: (Double -> Double) -> (Double -> Double) -> Int -> Double -> Double
sumfg f g = helperf
  where
    helperf 0 _ = 0
    helperf i x = let fx = f x in fx + helperg (i-1) fx
    helperg 0 _ = 0
    helperg i x = let gx = g x in gx + helperf (i-1) gx
Here we defined two "helper" functions, helperf and helperg: helperf sums f x with helperg (i-1) (f x), and helperg does the same, except that it uses g instead of f. We thus use mutual recursion to solve the problem.
We can however solve this problem more elegantly by making use of scanl :: (b -> a -> b) -> b -> [a] -> [b], take :: Int -> [a] -> [a], and sum :: Num a => [a] -> a:
sumfg :: Num a => (a -> a) -> (a -> a) -> Int -> a -> a
sumfg f g n x = sum (take n (scanl (flip ($)) (f x) (cycle [g, f])))
Here we make an infinite list that alternates between g and f, like [g, f, g, f, g, f, ...], with cycle [g, f]. We then use scanl (flip ($)) to apply the accumulator to one function at a time, yielding each intermediate result. We take the first n items of that list with take n, and finally we use sum to sum up these values.
For example:
Prelude> sumfg (2+) (3*) 5 1
91
Since (2+1) + (3*(2+1)) + (2+(3*(2+1))) + (3*(2+(3*(2+1)))) + (2+(3*(2+(3*(2+1))))) is 91.
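To see the intermediate list that scanl builds, here is a short GHCi check of the same example (my own illustration, assuming the expression above):
Prelude> take 5 (scanl (flip ($)) ((2+) 1) (cycle [(3*), (2+)]))
[3,9,11,33,35]
Prelude> sum [3,9,11,33,35]
91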
We also generalized the signature: we can now work with any numerical type a, with the two functions f and g of type f, g :: a -> a.

How do I avoid stackoverflow error in Haskell

I want to write this function:
calling customPower 2 2
would give back 2^2 + 2^1 + 1
calling customPower 3 3
would give back 3^3 + 3^2 + 3^1 + 1
Here is my code:
customPower :: Int -> Int -> Int
customPower x y
  | y == 0 = 1
  | y > 0 = (x^(y)) + (customPower x y-1)
It gives me a stack overflow exception and I can't find where the error is. Everything seems fine.
The operators have lower precedence than function calls; this means that your recursive call:
... + (customPower x y-1)
is interpreted as:
... + ((customPower x y)-1)
so you keep calling with the same parameters, therefore the recursion can never end.
We can fix this by adding brackets for y-1:
customPower :: Int -> Int -> Int
customPower x y
  | y > 0 = x^y + customPower x (y-1)
  | otherwise = 1
With this modification, we do not get stuck in an infinite loop:
Prelude> customPower 5 3
156
We can rewrite the above by making use of sum :: Num a => [a] -> a and map :: (a -> b) -> [a] -> [b] to implement this with a one-liner:
customPower :: (Num a, Integral b) => a -> b -> a
customPower x y = sum (map (x^) [0..y])
or we can use iterate :: (a -> a) -> a -> [a]:
customPower :: (Num a, Integral b) => a -> b -> a
customPower x y = sum (take (y+1) (iterate (x*) 1))
Due to Haskell's laziness, the above attempts will likely still result in a call stack that scales linearly with the value of y: the functions are, as #dfeuer says, not tail recursive. We can however work with an accumulator here:
customPower :: Int -> Int -> Int
customPower x = go 1
  where
    go a y | y > 0 = seq a (go (a + x^y) (y-1))
           | otherwise = a
Since the above sum is equal to a simple formula, we can even calculate the value in O(y log x):
$$\sum_{i=0}^{y} x^i = \frac{x^{y+1} - 1}{x - 1}$$
So we can calculate the value with:
customPower :: (Integral a, Integral b) => a -> b -> a
customPower x y = div (x^(y+1) - 1) (x - 1)
This will usually work faster, although in the rare case where the result times (x - 1) is larger than the maximum representable number of the type a (for a fixed-size type such as Int), this will overflow and return the wrong number.
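As a quick sanity check of the closed form against the earlier example (a GHCi session of my own, assuming the definition above):
Prelude> customPower 5 3
156
Prelude> div (5^(3+1) - 1) (5 - 1)
156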

Haskell school of expression fix function [duplicate]

This question already has answers here:
How do I use fix, and how does it work?
(5 answers)
Closed 6 years ago.
So I am reading Paul Hudak's book "The Haskell School of Expression" and am stuck on an exercise in there.
Here it goes
Suppose function fix is defined as
fix f = f (fix f)
What is the principal type of fix? That one I know, it's b -> b -> b
But I don't understand the way fix is defined, won't it go into an infinite recursion?
Also, let the remainder function be defined as
remainder :: Integer -> Integer -> Integer
remainder a b = if a < b then a
                else remainder (a - b) b
Rewrite remainder using fix so that it is non-recursive.
First of all the principal type of fix is actually (b -> b) -> b (remember that only b -> (b -> b) is the same as b -> b -> b).
In a strict language, such a definition would go into infinite recursion, but because Haskell is lazy, the arguments to a function are evaluated only if they are needed at some point. For example, you can define factorial:
-- with recursion
factorial :: Int -> Int
factorial n = if n == 0 then 1 else n * factorial (n-1)
-- with `fix`
factorial' :: Int -> Int
factorial' = fix (\f n -> if n == 0 then 1 else n * f (n - 1))
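For instance (my own GHCi check, with fix taken from the question's definition rather than Data.Function):
Prelude> let fix f = f (fix f)
Prelude> let factorial' = fix (\f n -> if n == 0 then 1 else n * f (n - 1))
Prelude> factorial' 5
120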
Following the same pattern, you should be able to define remainder.
Playing with it a little gives us
fix f = f (fix f) -- definition
fix f a = f (fix f) a -- eta expansion
fix f a b = f (fix f) a b -- eta expansion
remainder a b = if a < b then a else remainder (a - b) b -- definition
-- we want remainder = fix f: -- equation
fix f a b = if a < b then a else (fix f) (a - b) b -- substitution
= (\g -> if a < b then a else g (a - b) b) (fix f) -- abstraction
= fix (\g -> \a b -> if a < b then a else g (a - b) b) a b -- abstraction
thus
remainder =
  fix (\g a b -> if a < b then a else g (a - b) b) -- eta reduction
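Putting the result to work, here is a small self-contained sketch of my own (the test values 17 and 5 are arbitrary):

-- Definition from the exercise (alternatively: import Data.Function (fix))
fix :: (a -> a) -> a
fix f = f (fix f)

-- remainder written without explicit recursion
remainder :: Integer -> Integer -> Integer
remainder = fix (\g a b -> if a < b then a else g (a - b) b)

main :: IO ()
main = print (remainder 17 5)  -- prints 2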

Type declaration for number division

I tried all possible type declarations but I can't get this code to compile. The trick is in handling the types for division. I tried Num a, Fractional a, Float a, etc.
cube x = x * x * x

sum' term a next b =
  if a > b
    then 0
    else term a + sum' term (next a) next b

integral f a b n = (h / 3) * (sum' term 0 succ n) where
  h = (b - a) / n
  y k = f $ a + (k * h)
  term k
    | k == 0 || k == n = y k
    | odd k = 4 * y k
    | even k = 2 * y k
main = do
  print $ integral cube 0 1 100 -- 0.25
  print $ integral (\x -> 3 * x * x) 1 3 100 -- 26
I isolated problem by deleting (/) function. This code compiles without any type declaration at all:
cube x = x * x * x

sum' term a next b =
  if a > b
    then 0
    else term a + sum' term (next a) next b

integral f a b n = (sum' term 0 succ n) where
  h = (b - a)
  y k = f $ a + (k * h)
  term k
    | k == 0 || k == n = y k
    | odd k = 4 * y k
    | even k = 2 * y k

main = do
  print $ integral cube 0 1 100
Another question is how to debug cases like this. Haskell's error messages don't help much; it's hard to make sense of something like "The type variable a0 is ambiguous" or "Could not deduce (a1 ~ a)".
P. S. It's ex. 1.29 from SICP.
Update
Final answer is:
cube :: Num a => a -> a
cube x = x * x * x

sum' :: (Int -> Double) -> Int -> (Int -> Int) -> Int -> Double
sum' term a next b =
  if a > b
    then 0
    else term a + sum' term (next a) next b

integral :: (Double -> Double) -> Double -> Double -> Int -> Double
integral f a b n = (h / 3) * sum' term 0 (+1) n where
  h = (b - a) / n' where n' = fromIntegral n
  y k = f $ a + (k * h)
  term k
    | k == 0 || k == n = y k'
    | odd k = 4 * y k'
    | even k = 2 * y k'
    where k' = fromIntegral k

main = do
  print $ integral cube 0 1 100 -- 0.25
  print $ integral cube 0 1 1000 -- 0.25
  print $ integral (\x -> 3 * x * x) 1 3 100 -- 26
/ is only used for types that are instances of Fractional; for Integral types use quot. You can use quot as an infix operator using backticks:
h = (b - a) `quot` n
The types of the two are
(/) :: Fractional a => a -> a -> a
quot :: Integral a => a -> a -> a
There are no types that are instances of both Fractional and Integral, which is why none of the type signatures would work. Unfortunately GHC doesn't know that it's impossible for a type to be an instance of both classes, so the error messages are not very intuitive. You get used to the style of GHC error messages though, and the detail they give helps a lot.
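For example (a quick GHCi illustration of the distinction, not from the original answer):
Prelude> 7 / 2
3.5
Prelude> 7 `quot` 2
3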
Also, as was suggested in the comments, I completely agree that all top level definitions should be given type signatures (including main). It makes error messages a lot easier to read.
Edit: Based on the comments below, it looks like what you want is something more like this (type signature-wise)
cube :: Num a => a -> a
sum' :: (Int -> Double) -> Int -> (Int -> Int) -> Int -> Double
integral :: (Double -> Double) -> Double -> Double -> Int -> Double
You will need to use fromIntegral to convert from Int to Double in h and in k. The type errors should be at least a bit more readable with these type signatures though.

There is a function that searches for an attractive fixed point through iteration. Can we generalize it to monadic functions?

Intro
Fixed points are such arguments to a function that it would return unchanged: f x == x. An example would be (\x -> x^2) 1 == 1 -- here the fixed point is 1.
Attractive fixed points are those fixed points that can be found by iteration from some starting point. For example, (\x -> x^2) 0.5 would converge to 0, thus 0 is an attractive fixed point of this function.
Attractive fixed points can be, with luck, approached (and, in some cases, even reached in that many steps) from a suitable non-fixed point by iterating the function from that point. Other times, the iteration will diverge, so there should first be a proof in place that a fixed point will attract the iterating process. For some functions, the proof is common knowledge.
The code
I have tidied up some prior art that accomplishes the task neatly. I then set out to extend the same idea to monadic functions, but with no luck. This is the code I have by now:
module Fix where

-- | Take elements from a list until two equal adjacent elements are met. Of
-- those, take only the first one, then be done with it.
--
-- This function is intended to operate on infinite lists, but it will still
-- work on finite ones.
converge :: Eq a => [a] -> [a]
converge = convergeBy (==)

-- λ r a = \x -> (x + a / x) / 2
--     -- ^ A method of computing square roots due to Isaac Newton.
-- λ take 8 $ iterate (r 2) 1
-- [1.0,1.5,1.4166666666666665,1.4142156862745097,1.4142135623746899,
--  1.414213562373095,1.414213562373095,1.414213562373095]
-- λ converge $ iterate (r 2) 1
-- [1.0,1.5,1.4166666666666665,1.4142156862745097,1.4142135623746899,1.414213562373095]

-- | Find a fixed point of a function. May present a non-terminating function
-- if applied carelessly!
fixp :: Eq a => (a -> a) -> a -> a
fixp f = last . converge . iterate f

-- λ fixp (r 2) 1
-- 1.414213562373095

-- | Non-overloaded counterpart to `converge`.
convergeBy :: (a -> a -> Bool) -> [a] -> [a]
convergeBy _ [ ] = [ ]
convergeBy _ [x] = [x]
convergeBy eq (x: xs@(y: _))
  | x `eq` y = [x]
  | otherwise = x : convergeBy eq xs

-- λ convergeBy (\x y -> abs (x - y) < 0.001) $ iterate (r 2) 1
-- [1.0,1.5,1.4166666666666665,1.4142156862745097]

-- | Non-overloaded counterpart to `fixp`.
fixpBy :: (a -> a -> Bool) -> (a -> a) -> a -> a
fixpBy eq f = last . convergeBy eq . iterate f

-- λ fixpBy (\x y -> abs (x - y) < 0.001) (r 2) 1
-- 1.4142156862745097

-- | Find a fixed point of a monadic function. May present a non-terminating
-- function if applied carelessly!
-- TODO
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = last . _ . iterate f
(It may be loaded in a REPL. There are examples to run in the comments, for illustration.)
The problem
There is an _ in the definition of fixpM above. It is a function of type [m a] -> [m a] that should do, in principle, the same as the function converge above, but kinda lifted. I have come to suspect it can't be written.
I did compose another, specialized version of fixpM:
fixpM :: (Eq a, Monad m) => (a -> m a) -> a -> m a
fixpM f x = do
  y <- f x
  if x == y
    then return x
    else fixpM f y

-- λ fixpM (\x -> (".", x^2)) 0.5
-- ("............",0.0)
(An example run is, again, found in a comment.)
But it is a whole different algorithm, not an extension / generalization of the pure function we started with. In particular, we do not pass the stage where a list of inits up to the first repetition is made available.
Can we not extend the pure algorithm to work on monadic functions?
And why so?
I would admire a hint towards a piece of theory that explains how to either prove impossibility or construct a solution in a routine fashion, but perhaps this is just a triviality I'm missing while busy typing idle questions, in which case a straightforward counterexample would defeat me.
P.S. I understand this is a somewhat trivial exercise. Still, I want to have become done with it once and forever.
P.S. 2 A better approximation to the pure variant, as suggested by #n-m (retaining iterate), would look like this:
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = collapse . iterate f
  where
    collapse (mx: mxs@(my: _)) = do
      x <- mx
      y <- my
      if x == y
        then return x
        else collapse mxs
Through the use of iterate, its behaviour with regard to the monad is different in that the effects are retained between consecutive approximations. Performance-wise, these functions are of the same complexity.
P.S. 3 A more complete rendition of the ideas offered by #n-m encodes the algorithm, as far as I can see, one to one with the pure variant:
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = lastM . convergeM . iterate (f >>= \x -> return x)

convergeM :: (Monad m, Eq a) => [m a] -> m [a]
convergeM = convergeByM (==)

convergeByM :: (Monad m, Eq a) => (a -> a -> Bool) -> [m a] -> m [a]
convergeByM _ [ ] = return [ ]
convergeByM _ [mx] = mx >>= \x -> return [x]
convergeByM eq xs = do
  case xs of
    [ ] -> return [ ]
    [mx] -> mx >>= \x -> return [x]
    (mx: mxs@(my: _)) -> do
      x <- mx
      y <- my
      if x `eq` y
        then return [x]
        else do
          xs <- convergeM mxs
          return (x:xs)

lastM :: Monad m => m [a] -> m a
lastM mxs = mxs >>= \xs -> case xs of
  [] -> error "Fix.lastM: No last element!"
  xs -> return . head . reverse $ xs
Unfortunately, it happens to be rather lengthy. More substantially, both these solutions have the same somewhat undesirable behaviour with regard to the effects of the monad: all the effects are retained between consecutive approximations.
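To make that last point concrete, here is a small self-contained sketch of my own (not from the original post), using the ((,) String) writer-like monad with the P.S. 2 version of fixpM; the step function and the test value are assumptions chosen only for illustration:

-- The P.S. 2 definition, repeated so this sketch stands alone.
fixpM :: (Eq a, Monad m) => (m a -> m a) -> m a -> m a
fixpM f = collapse . iterate f
  where
    collapse (mx: mxs@(my: _)) = do
      x <- mx
      y <- my
      if x == y
        then return x
        else collapse mxs

-- Each approximation squares the value and logs one "*".
step :: (String, Double) -> (String, Double)
step mx = mx >>= \x -> ("*", x * x)

main :: IO ()
main = do
  let (logs, z) = fixpM step ("", 0.5)
  print z             -- 0.0, the attractive fixed point
  print (length logs) -- far more than the 12 effects of the specialized
                      -- (a -> m a) fixpM above: the logs of earlier
                      -- approximations are replayed at every comparison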
