Generator, Selector Pattern to calculate approximations in Haskell

I am trying to implement a generator/selector pattern to approximately calculate square roots in Haskell.
My generator looks like this:
generator :: (Double -> Double) -> Double -> [Double]
generator f a = generator f (f a)
My selector:
selector :: Double -> [Double] -> Double
selector eps (a : b : r)
  | abs (a - b) <= eps = b
  | otherwise = selector eps (b : r)
And the approximation function:
next :: Double -> Double -> Double
next n x = (x + n/x) / 2
Calling this like selector 0.1 (generator (next 5) 2)
should give me ...(next 5 (next 5 (next 5 2))), i.e. [2.25, 2.23611111111111, 2.2360679779158, ...]. Since my eps parameter is 0.1, abs (a - b) <= eps should be true on the first comparison, giving me 2.23611111111111 as a result. However, I end up in an endless loop.
Could somebody explain to me what is wrong in the implementation of those functions?
Thanks in advance

This definition
generator f a = generator f (f a)
never generates any list elements: it gets stuck in an infinite recursion instead. You probably want
generator f a = a : generator f (f a)
which makes a the first element, followed by all the others we generate using recursion.
It could also be beneficial to avoid putting unevaluated thunks in the list. To avoid that, one could use
generator f a = a `seq` (a : generator f (f a))
so that a is evaluated early. This should not matter much in your code, since the
selector immediately evaluates the thunks as soon as they are generated.
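
For reference, here is the complete program with that one fix applied. Note that the list now starts with the seed 2, so the eps test succeeds on the second comparison rather than the first, but the result is the same:

generator :: (Double -> Double) -> Double -> [Double]
generator f a = a : generator f (f a)

selector :: Double -> [Double] -> Double
selector eps (a : b : r)
  | abs (a - b) <= eps = b
  | otherwise = selector eps (b : r)

next :: Double -> Double -> Double
next n x = (x + n / x) / 2

-- >>> selector 0.1 (generator (next 5) 2)
-- 2.2361111111111112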

Your generator function is missing the a :, as chi's answer correctly points out. However, there's a better solution than just adding that: get rid of generator altogether and use the built-in function iterate instead (or iterate' from Data.List if you want to avoid unevaluated thunks). These functions have the same behavior you want from generator, but they support optimizations such as list fusion that your own function won't. And of course, there's also the advantage that it's one less function you have to write and maintain.
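
A minimal sketch of that suggestion, reusing next and selector from the question (sqrtApprox is just an illustrative wrapper name):

import Data.List (iterate')

sqrtApprox :: Double -> Double -> Double
sqrtApprox eps n = selector eps (iterate' (next n) 2)

-- >>> sqrtApprox 0.1 5
-- 2.2361111111111112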

Related

Partial Derivatives in Haskell

A while back a friend wanted help with a program that could solve for the roots of functions using Newton's method, and naturally for that I needed some way to calculate the derivative of a function numerically, and this is what I came up with:
deriv f x = (f (x+h) - f x) / h where h = 0.00001
Newton's method was a fairly easy thing to implement, and it works rather well. But now I've started to wonder: is there some way I could use this function to compute partial derivatives numerically, or is that something that would require a full-on CAS? I would post my attempts, but I have absolutely no clue what to do yet.
Please keep in mind that I am new to Haskell. Thank you!
You can certainly do much the same thing as you already implemented, only with multivariate perturbation instead. But first, as you should always do with top-level functions, add a type signature:
deriv :: (Double -> Double) -> Double -> Double
That's not the most general signature possible, but probably sufficiently general for everything you'll need. I'll call
type ℝ = Double
in the following for brevity, i.e.
deriv :: (ℝ -> ℝ) -> ℝ -> ℝ
Now what you want is, for example in ℝ²:
grad :: ((ℝ,ℝ) -> ℝ) -> (ℝ,ℝ) -> (ℝ,ℝ)
grad f (x,y) = ((f (x+h,y) - f (x,y)) / h, (f (x,y+h) - f (x,y)) / h)
  where h = 0.00001
It's awkward to have to write out the components individually and make the definition specific to a particular-dimensional vector space. A generic way of doing it:
import Data.VectorSpace
import Data.Basis
grad :: (HasBasis v, Scalar v ~ ℝ) => (v -> ℝ) -> v -> v
grad f x = recompose [ (e, (f (x ^+^ h *^ basisValue e) - f x) ^/ h)
                     | (e,_) <- decompose x ]
  where h = 0.00001
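
A usage sketch (my example, assuming the vector-space package's instances for pairs of Doubles):

partials :: (Double, Double)
partials = grad (\(x, y) -> x * y + sin x) (1, 2)
-- ≈ (2.5403, 1.0), i.e. (y + cos x, x) at (1,2)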
Note that finite differencing with a pre-chosen step is always a tradeoff between inaccuracy from higher-order terms and floating-point error, so definitely check out automatic differentiation.
This is called automatic differentiation and there is a lot of really neat work in this area in Haskell, though I don't know how accessible it is.
From the wiki page:
A paper Beautiful Differentiation and the corresponding talk.
Forward mode libraries: ad, fad, vector-space, Data.Ring.Module.AutomaticDifferentiation
Reverse mode libraries: also ad, rad
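
To give a flavour of these, here is a small sketch using grad from the ad package (the list-valued lambda is the usual idiom for functions of several variables; exact, no step size involved):

import Numeric.AD (grad)

-- exact gradient of f(x,y) = x*y + sin x at (1,2)
main :: IO ()
main = print (grad (\[x, y] -> x * y + sin x) [1, 2])
-- [2.5403023058681398, 1.0]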

Why is this version of 'fix' more efficient in Haskell?

In Haskell, this is a simple (naive) definition of the fixed-point combinator:
fix :: (a -> a) -> a
fix f = f (fix f)
But here is how Haskell actually implements it (more efficiently):
fix f = let x = f x in x
My question is: why is the second one more efficient than the first?
The slow fix calls f on each step of recursion, while the fast one calls f exactly once. It can be visualized with tracing:
import Debug.Trace
fix f = f (fix f)
fix' f = let x = f x in x
facf :: (Int -> Int) -> Int -> Int
facf f 0 = 1
facf f n = n * f (n - 1)
tracedFacf x = trace "called" facf x
fac = fix tracedFacf
fac' = fix' tracedFacf
Now try running it:
> fac 3
called
called
called
called
6
> fac' 3
called
6
In more detail, let x = f x in x results in a lazy thunk being allocated for x, and a pointer to this thunk is passed to f. On first evaluating fix' f, the thunk is evaluated and all references to it (here specifically: the one we pass to f) are redirected to the resulting value. It just happens that x is given a value that also contains a reference to x.
I admit this can be rather mind-bending. It's something that one should get used to when working with laziness.
I don't think this always (or necessarily ever) helps when you're calling fix with a function that takes two arguments to produce a function taking one argument. You'd have to run some benchmarks to see. But you can also call it with a function taking one argument!
fix (1 :)
is a circular linked list. Using the naive definition of fix, it would instead be an infinite list, with new pieces built lazily as the structure is forced.
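
To see the circular structure in action (my snippet, using fix from Data.Function, which is the let-based version):

import Data.Function (fix)

ones :: [Int]
ones = fix (1 :)  -- a single cons cell whose tail points back at itself

-- >>> take 5 ones
-- [1,1,1,1,1]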
I believe this has been asked already, but I couldn't find the answer. The reason is that the first version
fix f = f (fix f)
is a recursive function, so it can't be inlined and then optimized. From the GHC manual:
For example, for a self-recursive function, the loop breaker can only be the function itself, so an INLINE pragma is always ignored.
But
fix f = let x = f x in x
isn't recursive: the recursion is moved into the let binding, so it's possible to inline it.
Update: I did some tests, and while the former version doesn't inline and the latter does, it doesn't seem to be crucial for performance. So the other explanations (a single object on the heap vs. creating one every iteration) seem to be more accurate.

Can fixed-point functions be used on polynomials?

I was thinking of a way to represent algebraic numbers in Haskell as a stream of approximations. You could probably do this with some root-finding algorithm. But that's no fun. So you could add x to the polynomial, reducing the problem to finding its fixed points.
So if you have a function in Haskell like
f :: Double -> Double
f x = x ^ 2 + x
I don't conceptually understand why fix doesn't work, which is to say, I can easily verify for myself that it doesn't work, but isn't 0 the true least fixed point of f? Is there another simple (as in definition size) fixed point function that would work?
Here is the implementation of the fix function:
fix :: (a -> a) -> a
fix f = let x = f x in x
It doesn't work for primitive types like Double. It's intended for types that have a more complex structure to them. For instance:
g :: Maybe Int -> Maybe Int
g i = Just $ case i of
  Nothing -> 3
  Just _  -> 4
This function will work with fix because it yields information about its result faster than it reads its input. In other words, the Just portion is known without looking at i at all, which enables fix to reach a fixed point.
When your function is Double -> Double, and examines its input, fix won't work because there's no way to partially evaluate a Double.
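
Concretely (my check, using fix from Data.Function):

-- g produces the Just constructor before inspecting its argument:
-- >>> fix g
-- Just 4

-- but a function that examines its Double argument first never gets started:
-- >>> fix (\x -> x ^ 2 + x) :: Double
-- <<loop>> (or simply hangs, depending on whether GHC detects the cycle)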

Are there any Haskell libraries for integrating complex functions?

How to numerically integrate complex, complex-valued functions in Haskell?
Are there any existing libraries for it? numeric-tools operates only on reals.
I am aware that on the complex plane there are only line integrals, so the interface I am interested in is something like this:
i = integrate f x a b precision
to calculate the integral along a straight line from a to b of the function f at the point x.
i, x, a, b are all of type Complex Double, or better, Num a => Complex a.
Please... :)
You can make something like this yourself. Suppose you have a function realIntegrate of type (Double -> Double) -> (Double,Double) -> Double, taking a function and a tuple containing the lower and upper bounds, and returning the result to some fixed precision. You could build it on top of quadRomberg from numeric-tools, for example (its result wraps the estimate together with error information, so you'd project out a Double).
Then we can make your desired function as follows - I'm ignoring the precision for now (and I don't understand what your x parameter is for!):
integrate :: (Complex Double -> Complex Double) -> Complex Double -> Complex Double -> Complex Double
integrate f a b = r :+ i where
  r = realIntegrate realF (0,1)
  i = realIntegrate imagF (0,1)
  realF t = realPart (f (interpolate t)) -- or realF = realPart . f . interpolate
  imagF t = imagPart (f (interpolate t))
  interpolate t = a + (t :+ 0) * (b - a)
So we express the path from a to b as a function on the real interval from 0 to 1 by linear interpolation, take the value of f along that path, integrate the real and imaginary parts separately (I don't know if this can give numerically badly behaving results, though) and reassemble them into the final answer.
I haven't tested this code as I don't have numeric-tools installed, but at least it typechecks :-)
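
As a quick sanity check (my addition, assuming a working realIntegrate underneath): the integral of f(z) = z along the segment from 0 to 1+i is (1+i)²/2 = i, so

-- >>> integrate id 0 (1 :+ 1)
-- ≈ 0.0 :+ 1.0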

Haskell type designation

I have to determine the types of two functions (without using the compiler's :t); I just don't know how I should read these functions in order to take the correct steps.
f x = map -1 x
f x = map (-1) x
Well, I'm a bit confused about how these will be parsed.
Function application, or "the empty space operator", has higher precedence than any operator symbol, so the first line parses as f x = map - (1 x), which will most likely¹ be a type error.
The other example is parenthesized the way it looks, but note that (-1) desugars as negate 1. This is an exception to the normal rule, where operator sections like (+1) desugar as (\x -> x + 1), so this will also likely¹ be a type error, since map expects a function, not a number, as its first argument.
¹ I say likely because it is technically possible to provide Num instances for functions, which may allow this to type check.
For questions like this, the definitive answer is to check the Haskell Report. The relevant syntax hasn't changed from Haskell 98.
In particular, check the section on "Expressions". That should explain how expressions are parsed, operator precedence, and the like.
These functions do not have types, because they do not type check (you will get ridiculous type class constraints). To figure out why, you need to know that (-1) has type Num n => n, and you need to read up on how a - is interpreted with or without parens before it.
The following function is the "correct" version of your function:
f x = map (subtract 1) x
You should be able to figure out the type of this function, if I say that:
subtract 1 :: Num n => n -> n
map :: (a -> b) -> [a] -> [b]
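
For completeness (my worked answer, not part of the original hint): unifying subtract 1 :: Num n => n -> n with map's first argument a -> b forces a ~ n and b ~ n, so

f :: Num n => [n] -> [n]
f x = map (subtract 1) x

-- >>> f [1, 2, 3]
-- [0, 1, 2]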
Well, I did it by myself :P
The first line parses as (map) - (1 x), i.e. (-) applied to map and (1 x):
(-)   :: Num a => a -> a -> a
1     :: Num b => b
x     :: e
map   :: (c -> d) -> [c] -> [d]
Since map is the first argument of (-):
map   :: a
a     ~  (c -> d) -> [c] -> [d]
Since (1 x) is the second argument of (-):
(1 x) :: a
1     :: e -> a
Putting it together:
f :: (Num ((c -> d) -> [c] -> [d]), Num (e -> (c -> d) -> [c] -> [d])) => e -> (c -> d) -> [c] -> [d]