Are there any Haskell libraries for integrating complex functions? - haskell

How to numerically integrate complex, complex-valued functions in Haskell?
Are there any existing libraries for it? numeric-tools operates only on reals.
I am aware that on the complex plane there are only line integrals, so the interface I am interested in is something like this:
i = integrate f x a b precision
to calculate integral along straight line from a to b of function f on point x.
i, x, a, b are all of Complex Double or better Num a => Complex a type.
Please... :)

You can make something like this yourself. Suppose you have a function realIntegrate of type (Double -> Double) -> (Double,Double) -> Double, taking a function and a tuple containing the lower and upper bounds, returning the result to some fixed precision. You could define realIntegrate f (lo,hi) = quadRomberg defQuad (lo,hi) f using numeric-tools, for example.
Then we can make your desired function as follows - I'm ignoring the precision for now (and I don't understand what your x parameter is for!):
import Data.Complex

integrate :: (Complex Double -> Complex Double) -> Complex Double -> Complex Double -> Complex Double
integrate f a b = (b - a) * (r :+ i) where
    r = realIntegrate realF (0,1)
    i = realIntegrate imagF (0,1)
    realF t = realPart (f (interpolate t)) -- or realF = realPart . f . interpolate
    imagF t = imagPart (f (interpolate t))
    interpolate t = a + (t :+ 0) * (b - a)
So we express the path from a to b as a function on the real interval from 0 to 1 by linear interpolation, take the value of f along that path, integrate the real and imaginary parts separately (I don't know if this can give numerically badly behaved results, though) and reassemble them into the final answer. The factor (b - a) out front is the constant derivative of the straight-line path, which turns the integral over t into the actual line integral over z.
I haven't tested this code as I don't have numeric-tools installed, but at least it typechecks :-)
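As a quick sanity check (untested, assuming realIntegrate is wired up to numeric-tools as above; the expected value is just what calculus predicts for this contour):
-- ∫ z² dz along the straight line from 0 to 1+i is ((1+i)³)/3 = (-2 + 2i)/3,
-- so this should come out near (-0.667) :+ 0.667:
example :: Complex Double
example = integrate (\z -> z * z) (0 :+ 0) (1 :+ 1)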

Related

Generator, Selector Pattern to calculate approximations in Haskell

I am trying to implement a generator, selector pattern to approximately calculate square roots in haskell
My generator looks like this:
generator :: (Double -> Double) -> Double -> [Double]
generator f a = generator f (f a)
My selector:
selector :: Double -> [Double] -> Double
selector eps (a : b : r)
    | abs (a - b) <= eps = b
    | otherwise = selector eps (b : r)
And the approx function:
next :: Double -> Double -> Double
next n x = (x + n/x) / 2
Calling this like selector 0.1 (generator (next 5) 2)
should give me ... (next 5 (next 5 (next 5 2))), i.e. [2.25, 2.23611111111111, 2.2360679779158, ...]. Since my eps parameter is 0.1, abs (a - b) <= eps should be true on the first comparison, giving me 2.23611111111111 as a result. However, I end up in an endless loop.
Could somebody explain to me what is wrong in the implementation of those functions?
Thanks in advance
This definition
generator f a = generator f (f a)
never generates any list elements: it gets stuck into an infinite recursion instead. You probably want
generator f a = a : generator f (f a)
which makes a to be the first element, followed by all the others we generate using recursion.
It could also be beneficial to avoid putting unevaluated thunks in the list. To avoid that, one could use
generator f a = a `seq` (a : generator f (f a))
so that a is evaluated early. This should not matter much in your code, since the selector immediately evaluates the thunks as soon as they are generated.
Your generator function is missing the a:, as chi's answer correctly points out. However, there's a better solution than just adding that: get rid of generator altogether and use the built-in function iterate instead (or iterate' from Data.List if you want to avoid unevaluated thunks). These functions have the same behavior that you want from generator, but they support optimizations like list fusion that your own function won't. And of course, there's also the advantage that it's one less function that you have to write and maintain.
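Putting the pieces together, a minimal self-contained sketch of the iterate-based version (reusing next and selector from the question):
import Data.List (iterate')

next :: Double -> Double -> Double
next n x = (x + n / x) / 2

selector :: Double -> [Double] -> Double
selector eps (a : b : r)
    | abs (a - b) <= eps = b
    | otherwise = selector eps (b : r)

main :: IO ()
main = print (selector 0.1 (iterate' (next 5) 2))   -- ≈ 2.2361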

Partial Derivatives in Haskell

A while back a friend wanted help with a program that could solve for the roots of functions using Newton's method, and naturally for that I needed some way to calculate the derivative of a function numerically, and this is what I came up with:
deriv f x = (f (x+h) - f x) / h where h = 0.00001
Newton's method was a fairly easy thing to implement, and it works rather well. But now I've started to wonder - is there some way I could use this function to compute partial derivatives numerically, or is that something that would require a full-on CAS? I would post my attempts, but I have absolutely no clue what to do yet.
Please keep in mind that I am new to Haskell. Thank you!
You can certainly do much the same thing as you already implemented, only with multivariate perturbation instead. But first, as you should always do with top-level functions, add a type signature:
deriv :: (Double -> Double) -> Double -> Double
That's not the most general signature possible, but probably sufficiently general for everything you'll need. I'll call
type ℝ = Double
in the following for brevity, i.e.
deriv :: (ℝ -> ℝ) -> ℝ -> ℝ
Now what you want is, for example in ℝ²
grad :: ((ℝ,ℝ) -> ℝ) -> (ℝ,ℝ) -> (ℝ,ℝ)
grad f (x,y) = ( (f (x+h, y) - f (x,y)) / h
               , (f (x, y+h) - f (x,y)) / h )
    where h = 0.00001
It's awkward to have to write out the components individually and make the definition specific to a particular-dimensional vector space. A generic way of doing it:
import Data.VectorSpace
import Data.Basis
grad :: (HasBasis v, Scalar v ~ ℝ) => (v -> ℝ) -> v -> v
grad f x = recompose [ (e, (f (x ^+^ h *^ basisValue e) - f x) ^/ h)
                     | (e,_) <- decompose x ]
    where h = 0.00001
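As a quick sanity check (assuming the grad above is in scope, and using the HasBasis instance that vector-space provides for pairs):
gradExample :: (ℝ, ℝ)
gradExample = grad (\(x, y) -> x*x + y*y) (3, 4)
-- ≈ (6, 8), i.e. (2x, 2y) at the point (3, 4), up to finite-difference error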
Note that finite differencing with a pre-chosen step size is always a tradeoff between truncation error from the higher-order terms and floating-point rounding error, so definitely check out automatic differentiation.
What you are after here is automatic differentiation, and there is a lot of really neat work in this area in Haskell, though I don't know how accessible it is.
From the wiki page:
A paper Beautiful Differentiation and the corresponding talk.
Forward mode libraries: ad, fad, vector-space, Data.Ring.Module.AutomaticDifferentiation
Reverse mode libraries: also ad, rad
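If you go that route, a minimal sketch with the ad package might look like this (module and function names as documented for recent versions of ad; treat the exact API as an assumption):
import Numeric.AD (diff, grad)

-- Exact derivatives, no step size to tune:
slope :: Double
slope = diff (\x -> x^2 + 3*x) 2          -- 7.0

partials :: [Double]
partials = grad (\[x, y] -> x * y + sin x) [1.0, 2.0]
-- = [y + cos x, x] evaluated at (1, 2)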

Easy function gives compile error on conversion from Int to Double

Why does this easy function which computes the distance between 2 integer points in the plane not compile?
distance :: (Int, Int) -> (Int, Int) -> Double
distance (x, y) (u, v) = sqrt ((x - u) ^ 2 + (y - v) ^ 2)
I get the error Couldn't match expected type ‘Double’ with actual type ‘Int’.
It is frustrating that such an easy mathematical function consumes so much of my time. Any explanation of why this goes wrong, and the most elegant way to fix it, is appreciated.
This is my solution to overcome the problem
distance :: (Int, Int) -> (Int, Int) -> Double
distance (x, y) (u, v) =
    let xd = fromIntegral x :: Double
        yd = fromIntegral y :: Double
        ud = fromIntegral u :: Double
        vd = fromIntegral v :: Double
    in sqrt ((xd - ud) ^ 2 + (yd - vd) ^ 2)
but there must be a more elegant way.
Most languages only do type inference (if any) “in the direction of data flow”. E.g., if you start with a value 2 in Java or Python, that'll be an int. You calculate something like 2 + 4, and the + operator infers from the integer arguments that the result is also an int. In dynamic languages this is the only way that's possible at all (because the types are only an “associated property” of values). In static languages like C++, the inference step is only done once at compile time, but it's still done largely “as if the types were associated properties of values”.
Not so in Haskell. Like other Hindley-Milner languages, it has a type system that works completely independent of any runtime data flow directions. It can still do forward-inference ((2::Int) + (4::Int) is unambiguously of type Int), but it's only a special case – types can just as well be inferred in the “reverse direction”, i.e. if you write (x + y) :: Int the compiler is able to infer that both x and y must have type Int as well.
This reverse-polymorphism enables many nice tricks – example:
Prelude Debug.SimpleReflect> 2 + 4 :: Expr
2 + 4
Prelude Debug.SimpleReflect> 7^3 :: Expr
7 * 7 * 7
...but it only works if the language never does implicit conversions, not even in “safe†, obvious cases” like Int -> Integer.
Usually, the type checker automatically infers the most sensible type. For your original implementation, the checker would infer the type
distance :: Floating a => (a, a) -> (a, a) -> a
and that – or perhaps the specialised version
distance :: (Double,Double) -> (Double,Double) -> Double
– is a much more sensible type than your (Int, Int) -> ... attempt, because the Euclidean distance actually makes no sense on a discrete grid (you'd want something like the taxicab distance there).
What you'd actually want is distance from the vector-space package. It is more general: it works not only on 2-tuples but on any suitable space.
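For example, with the AffineSpace and InnerSpace instances vector-space already provides for pairs of Doubles (a minimal sketch; the names are just for illustration):
import Data.AffineSpace (distance)

p1, p2 :: (Double, Double)
p1 = (1, 2)
p2 = (4, 6)

-- distance works on any affine space whose difference type is an inner-product space:
main :: IO ()
main = print (distance p1 p2)   -- 5.0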
†Int -> Double is actually not a safe conversion – try float(1000000000000000001) in Python! So even without Hindley-Milner, this is not really a very smart thing to do implicitly.
SOLVED: now I have this
distance :: (Int, Int) -> (Int, Int) -> Double
distance (x, y) (u, v) = sqrt (fromIntegral ((x - u) ^ 2 + (y - v) ^ 2))

Can fixed-point functions be used on polynomials?

I was thinking of a way to represent algebraic numbers in Haskell as a stream of approximations. You could probably do this with some root-finding algorithm. But that's no fun. So you could add x to the polynomial, reducing the problem to finding its fixed points.
So if you have a function in Haskell like
f :: Double -> Double
f x = x ^ 2 + x
I don't conceptually understand why fix doesn't work, which is to say, I can easily verify for myself that it doesn't work, but isn't 0 the true least fixed point of f? Is there another simple (as in definition size) fixed point function that would work?
Here is the implementation of the fix function:
fix :: (a -> a) -> a
fix f = let x = f x in x
It doesn't work for primitive types like Double. It's intended for types that have a more complex structure to them. For instance:
g :: Maybe Int -> Maybe Int
g i = Just $ case i of
    Nothing -> 3
    Just _ -> 4
This function will work with fix because it yields information about its result faster than it reads its input. In other words, the Just portion is known without looking at i at all, which enables it to reach a fixed point.
When your function is Double -> Double, and examines its input, fix won't work because there's no way to partially evaluate a Double.
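For concreteness, a small self-contained sketch of the Maybe example above and what fix produces for it:
import Data.Function (fix)

g :: Maybe Int -> Maybe Int
g i = Just $ case i of
    Nothing -> 3
    Just _ -> 4

main :: IO ()
main = print (fix g)   -- Just 4: the Just is produced before i is ever inspected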

How to have an operator which adds/subtracts both absolute and relative values, in Haskell

(Apologies for the weird title, but I could not think of a better one.)
For a personal Haskell project I want to have the concepts of 'absolute values' (like a frequency) and relative values (like the ratio between two frequencies). In my context, it makes no sense to add two absolute values: one can add relative values to produce new relative values, and add a relative value to an absolute one to produce a new absolute value (and likewise for subtraction).
I've defined type classes for these: see below. However, note that the operators ##+ and #+ have a similar structure (and likewise for ##- and #-). Therefore I would prefer to merge these operators, so that I have a single addition operator, which adds a relative value (and likewise a single subtraction operator, which results in a relative value). UPDATE: To clarify, my goal is to unify my ##+ and #+ into a single operator. My goal is not to unify this with the existing (Num) + operator.
However, I don't see how to do this with type classes.
Question: Can this be done, and if so, how? Or should I not be trying?
The following is what I currently have:
{-# LANGUAGE MultiParamTypeClasses #-}
class Abs a where
    nullPoint :: a

class Rel r where
    zero :: r
    (##+) :: r -> r -> r
    neg :: r -> r
    (##-) :: Rel r => r -> r -> r
    r ##- s = r ##+ neg s

class (Abs a, Rel r) => AbsRel a r where
    (#+) :: a -> r -> a
    (#-) :: a -> a -> r
I think you're looking for a concept called a Torsor. A torsor consists of a set of values, a set of differences, and an operator which adds a difference to a value. Additionally, the set of differences must form an additive group, so differences can also be added together.
Interestingly, torsors are everywhere. Common examples include
Points and Vectors
Dates and date-differences
Files and diffs
etc.
One possible Haskell definition is:
{-# LANGUAGE TypeFamilies #-}

class Torsor a where
    type TorsorOf a :: *
    (.-) :: a -> a -> TorsorOf a
    (.+) :: a -> TorsorOf a -> a
Here are few example instances:
instance Torsor UTCTime where
    type TorsorOf UTCTime = NominalDiffTime
    a .- b = diffUTCTime a b
    a .+ b = addUTCTime b a

instance Torsor Double where
    type TorsorOf Double = Double
    a .- b = a - b
    a .+ b = a + b

instance Torsor Int where
    type TorsorOf Int = Int
    a .- b = a - b
    a .+ b = a + b
In the last case, notice that the two sets of the torsor don't need to be different sets, which makes adding your relative values together simple.
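A tiny usage sketch, assuming the class and the Int instance above are in scope (the names below are made up for illustration):
bumped :: Int
bumped = 10 .+ 5              -- add a difference to a value: 15

gap :: Int
gap = (17 :: Int) .- 10       -- subtract two values to get a difference: 7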
For more information, see the much nicer description in Roman Cheplyaka's blog.
I don't think you should be trying to unify these operators. Subtracting two vectors and subtracting two points are fundamentally different operations. The fact that it's difficult to represent them as the same thing in the type system is not the type system being awkward - it's because these two concepts really are different things!
The mathematical framework behind what you're working with is the affine space.
These are already available in Haskell in the vector-space package (do cabal install vector-space at the command prompt). Rather than using multi-parameter type classes, it uses type families to associate a vector (relative) type with each point (absolute) type.
Here's a minimal example showing how to define your own absolute and relative data types, and their interaction:
{-# LANGUAGE TypeFamilies #-}
import Data.VectorSpace
import Data.AffineSpace
data Point = Point { px :: Float, py :: Float }
data Vec = Vec { vx :: Float, vy :: Float }
instance AdditiveGroup Vec where
    zeroV = Vec 0 0
    negateV (Vec x y) = Vec (-x) (-y)
    Vec x y ^+^ Vec x' y' = Vec (x+x') (y+y')

instance AffineSpace Point where
    type Diff Point = Vec
    Point x y .-. Point x' y' = Vec (x-x') (y-y')
    Point x y .+^ Vec x' y' = Point (x+x') (y+y')
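And a small usage sketch, assuming the definitions above (the names are just for illustration):
displacement :: Vec
displacement = Point 3 4 .-. Point 1 1    -- a relative value: Vec 2 3

destination :: Point
destination = Point 1 1 .+^ Vec 2 3       -- absolute plus relative: Point 3 4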
You have two answers telling you what you should do, here's another answer telling you how to do what you asked for (which might not be a good idea). :)
class Add a b c | a b -> c where
    (#+) :: a -> b -> c

instance Add AbsTime RelTime AbsTime where
    (#+) = ...

instance Add RelTime RelTime RelTime where
    (#+) = ...
The overloading for (#+) makes it very flexible. Too flexible, IMO. The only restraint is that the result type is determined by the argument types (without this FD the operator becomes almost unusable because it constrains nothing).
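To make that sketch compile on its own, here is a self-contained version with made-up AbsTime/RelTime newtypes standing in for your actual types:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

newtype AbsTime = AbsTime Double deriving Show   -- hypothetical absolute type
newtype RelTime = RelTime Double deriving Show   -- hypothetical relative type

class Add a b c | a b -> c where
    (#+) :: a -> b -> c

instance Add AbsTime RelTime AbsTime where
    AbsTime t #+ RelTime d = AbsTime (t + d)

instance Add RelTime RelTime RelTime where
    RelTime d #+ RelTime e = RelTime (d + e)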

Resources