Partial Derivatives in Haskell

A while back a friend wanted help with a program that could solve for the roots of functions using Newton's method. For that I needed some way to calculate the derivative of a function numerically, and this is what I came up with:
deriv f x = (f (x+h) - f x) / h where h = 0.00001
Newton's method was a fairly easy thing to implement, and it works rather well. But now I've started to wonder: is there some way I could use this function to compute partial derivatives numerically, or is that something that would require a full-on CAS? I would post my attempts, but I have absolutely no clue what to do yet.
Please keep in mind that I am new to Haskell. Thank you!
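For reference, here is a minimal sketch of the kind of Newton iteration described above, built on the deriv from the question; the name newton and the 1e-9 tolerance are illustrative, not from the original post:
newton :: (Double -> Double) -> Double -> Double
newton f x
  | abs fx < 1e-9 = x                              -- close enough to a root
  | otherwise     = newton f (x - fx / deriv f x)  -- standard Newton update
  where fx = f x
-- e.g. newton (\x -> x*x - 2) 1 converges to about 1.4142135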

You can certainly do much the same thing as you already implemented, only with multivariate perturbation instead. But first, as you should always do with top-level functions, add a type signature:
deriv :: (Double -> Double) -> Double -> Double
That's not the most general signature possible, but probably sufficiently general for everything you'll need. I'll call
type ℝ = Double
in the following for brevity, i.e.
deriv :: (ℝ -> ℝ) -> ℝ -> ℝ
Now what you want is, for example in ℝ²
grad :: ((ℝ,ℝ) -> ℝ) -> (ℝ,ℝ) -> (ℝ,ℝ)
grad f (x,y) = ((f (x+h,y) - f (x,y)) / h, (f (x,y+h) - f (x,y)) / h)
  where h = 0.00001
It's awkward to have to write out the components individually and make the definition specific to a particular-dimensional vector space. A generic way of doing it:
import Data.VectorSpace
import Data.Basis
grad :: (HasBasis v, Scalar v ~ ℝ) => (v -> ℝ) -> v -> v
grad f x = recompose [ (e, (f (x ^+^ h *^ basisValue e) - f x) ^/ h)
                     | (e,_) <- decompose x ]
  where h = 0.00001
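For instance, assuming vector-space's HasBasis instance for pairs of Doubles (a sketch, not tested output), one would expect roughly:
grad (\(x,y) -> x^2 + 3*y) (1,2)   -- ≈ (2.00001, 3.0), the forward differences of ∂f/∂x = 2x and ∂f/∂y = 3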
Note that finite differencing with a pre-chosen step size is always a tradeoff between inaccuracy from higher-order terms and inaccuracy from floating-point rounding, so definitely check out automatic differentiation.

This is called automatic differentiation and there is a lot of really neat work in this area in Haskell, though I don't know how accessible it is.
From the wiki page:
A paper Beautiful Differentiation and the corresponding talk.
Forward mode libraries: ad, fad, vector-space, Data.Ring.Module.AutomaticDifferentiation
Reverse mode libraries: also ad, rad
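As a quick taste of the ad package (a sketch following the style of that package's documentation), its grad returns exact derivatives rather than finite-difference approximations:
import Numeric.AD (grad)

partials :: [Double]
partials = grad (\[x, y] -> x^2 + 3*y) [1, 2]   -- [2.0, 3.0]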

Related

How to use a refutation to direct the type checker in Haskell?

Does filling the hole in the following program necessarily require non-constructive means? If yes, is it still the case if x :~: y is decidable?
More generally, how do I use a refutation to guide the type checker?
(I am aware that I can work around the problem by defining Choose as a GADT; I'm asking specifically about type families.)
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
module PropositionalDisequality where
import Data.Type.Equality
import Data.Void
type family Choose x y where
  Choose x x = 1
  Choose x _ = 2
lem :: (x :~: y -> Void) -> Choose x y :~: 2
lem refutation = _
If you try hard enough to implement a function, you can convince yourself
that it is not possible. If you're not convinced, the argument can be made
more formal: we enumerate programs exhaustively to find that none is possible. It turns out there are only half a dozen meaningful cases to consider.
I wonder why this argument is not made more often.
Totally not accurate summary:
Act I: proof search is easy.
Act II: dependent types too.
Act III: Haskell is still fine for writing dependently typed programs.
I. The proof search game
First we define the search space.
We can reduce any Haskell definition to one of the form
lem = (exp)
for some expression (exp). Now we only need to find a single expression.
Look at all possible ways of making an expression in Haskell:
https://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-220003
(this doesn't account for extensions, exercise for the reader).
It fits a page in a single column, so it's not that big to start with.
Moreover, most of them are sugar for some form of function application or
pattern-matching; we can also desugar away type classes with dictionary
passing, so we're left with a ridiculously small lambda calculus:
lambdas, \x -> ...
pattern-matching, case ... of ...
function application, f x
constructors, C (including integer literals)
constants, c (for primitives that cannot be written in terms of the constructs above, so various built-ins (seq) and maybe FFI if that counts)
variables (bound by lambdas and cases)
We can exclude every constant on the grounds that I think the question is
really about pure lambda calculus (or the reader can enumerate the constants,
to exclude black-magic constants like undefined, unsafeCoerce,
unsafePerformIO that make everything collapse (any type is inhabited and, for
some of those, the type system is unsound), and to be left with white-magic
constants to which the present theoretical argument can be generalized via a
well-funded thesis).
We can also reasonably assume that we want a solution with no recursion involved
(to get rid of noise like lem = lem, and fix if you felt like you couldn't
part with it before), and which actually has a normal form, or preferably, a
canonical form with respect to βη-equivalence. In other words, we refine and
examine the set of possible solutions as follows.
lem :: _ -> _ has a function type, so we can assume WLOG that its definition starts with a lambda:
-- Any solution
lem = (exp)
-- is η-equivalent to a lambda
lem = \refutation -> (exp) refutation
-- so let's assume we do have a lambda
lem = \refutation -> _hole
Now enumerate what could be under the lambda.
It could be a constructor,
which then has to be Refl, but there is no proof that Choose x y ~ 2 in
the context (here we could formalize and enumerate the type equalities the
typechecker knows about and can derive, or make the syntax of coercions
(proofs of equalities) explicit and keep playing this proof search game
with them), so this doesn't type check:
lem = \refutation -> Refl
Maybe there is some way of constructing that equality proof, but then the
expression would start with something else, which is going to be another
case of the proof.
It could be some application of a constructor C x1 x2 ..., or of the
variable refutation (applied or not); but there's no possible way that's
well-typed: it has to somehow produce a (:~:), and Refl is really the
only way.
Or it could be a case. WLOG, there is no nested case on the left, nor any
constructor, because the expression could be simplified in both cases:
-- Any left-nested case expression
case (case (e) of { C x1 x2 -> (f) }) of { D y1 y2 -> (g) }
-- is equivalent to a right-nested case
case (e) of { C x1 x2 -> case (f) of { D y1 y2 -> (g) } }
-- Any case expression with a nested constructor
case (C u v) of { C x1 x2 -> f x1 x2 }
-- reduces to
f u v
So the last subcase is the variable case:
lem = \refutation -> case refutation (_hole :: x :~: y) of {}
and we have to construct an x :~: y. We enumerate ways of filling the
_hole again. It's either Refl, but no proof is available, or
(skipping some steps) case refutation (_anotherHole :: x :~: y) of {},
and we have an infinite descent on our hands, which is also absurd.
A different possible argument here is that we can pull out the case
from the application, to remove this case from consideration WLOG.
-- Any application to a case
f (case e of C x1 x2 -> g x1 x2)
-- is equivalent to a case with the application inside
case e of C x1 x2 -> f (g x1 x2)
There are no more cases. The search is complete, and we didn't find an
implementation of (x :~: y -> Void) -> Choose x y :~: 2. QED.
To read more on this topic, I guess a course/book about lambda calculus up
until the normalization proof of the simply-typed lambda calculus should give
you the basic tools to start with. The following thesis contains an
introduction on the subject in its first part, but admittedly I'm a poor judge
of the difficulty of such material: Which types have a unique inhabitant?
Focusing on pure program equivalence,
by Gabriel Scherer.
Feel free to suggest more adequate resources and literature.
II. Fixing the proposition and proving it with dependent types
Your initial intuition that this should encode a valid proposition
is definitely valid. How might we fix it to make it provable?
Technically, the type we are looking at is quantified with forall:
forall x y. (x :~: y -> Void) -> Choose x y :~: 2
An important feature of forall is that it is an irrelevant quantifier.
The variables it introduces cannot be used "directly" in a term of this type. Although that aspect becomes more prominent in the presence of dependent types,
it still pervades Haskell today, and it provides another intuition for why this (and many other examples) is not "provable" in Haskell: if you think about why that proposition seems valid, you will naturally start with a case split on whether x is equal to y; but to even do such a case split you need a way to
decide which side you're on, which of course has to look at x and y,
so they cannot be irrelevant. forall in Haskell is not at all like what most people mean by "for all".
Some discussion on the matter of relevance can be found in the thesis Dependent Types in Haskell, by Richard Eisenberg (in particular, Section 3.1.1.5 for an initial example, Section 4.3 for relevance in Dependent Haskell and Section 8.7 for comparison with other languages with dependent types).
Dependent Haskell will need a relevant quantifier to complement forall, and
which would get us closer to proving this:
foreach x y. (x :~: y -> Void) -> Choose x y :~: 2
Then we could probably write this:
lem :: foreach x y. (x :~: y -> Void) -> Choose x y :~: 2
lem x y p = case x ==? y of
  Left r       -> absurd (p r)  -- x ~ y, that's absurd!
  Right Irrefl -> Refl          -- x /~ y, so Choose x y = 2
That also assumes a first-class notion of disequality /~, complementing ~,
to help Choose reduce when it is in the context and a decision function
(==?) :: foreach x y. Either (x :~: y) (x :/~: y).
Actually, that machinery isn't necessary, that just makes for a shorter
answer.
At this point I'm making stuff up because Dependent Haskell does not exist yet,
but that is easily doable in related dependently typed languages (Coq, Agda,
Idris, Lean), modulo an adequate replacement of the type family Choose
(type families are in some sense too powerful to be translated as mere
functions, so may be cheating, but I digress).
Here is a comparable program in Coq, showing also that lem applied to 1 and 2
and a suitable proof does reduce to a proof by reflexivity of choose 1 2 = 2.
https://gist.github.com/Lysxia/5a9b6996a3aae741688e7bf83903887b
III. Without dependent types
A critical source of difficulty here is that Choose is a closed type
family with overlapping instances. It is problematic because there is
no proper way to express the fact that x and y are not equal in Haskell,
to know that the first clause Choose x x does not apply.
A more fruitful avenue if you're into Pseudo-Dependent Haskell is to use a
boolean type equality:
-- In the base library
import Data.Type.Bool (If)
import Data.Type.Equality (type (==))
type Choose x y = If (x == y) 1 2
An alternative encoding of equality constraints becomes useful for this style:
type x ~~ y = ((x == y) ~ 'True)
type x /~ y = ((x == y) ~ 'False)
with that, we can get another version of the type-proposition above,
expressible in current Haskell (where SBool is the singleton type of Bool),
which essentially can be read as adding the assumption that the equality of x
and y is decidable. This does not contradict the earlier claim about the "irrelevance" of forall: the function is inspecting a boolean (or rather an SBool), which postpones the inspection of x and y to whoever calls lem.
lem :: forall x y. SBool (x == y) -> ((x ~~ y) => Void) -> Choose x y :~: 2
lem decideEq p = case decideEq of
  STrue  -> absurd p
  SFalse -> Refl
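For completeness, here is a minimal hand-rolled singleton for Bool that lets the definition above compile; this is a sketch not spelled out in the original answer (the singletons package provides an equivalent):
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data SBool (b :: Bool) where
  STrue  :: SBool 'True
  SFalse :: SBool 'False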

Generator, Selector Pattern to calculate approximations in Haskell

I am trying to implement a generator/selector pattern to approximately calculate square roots in Haskell.
My generator looks like this:
generator :: (Double -> Double) -> Double -> [Double]
generator f a = generator f (f a)
My selector:
selector :: Double -> [Double] -> Double
selector eps (a : b : r)
  | abs (a - b) <= eps = b
  | otherwise          = selector eps (b : r)
And the approx function:
next :: Double -> Double -> Double
next n x = (x + n/x) / 2
Calling this like selector 0.1 (generator (next 5) 2)
should give me ...(next 5 (next 5 (next 5 2))), i.e. [2.25, 2.23611111111111, 2.2360679779158, ...]. Since my eps parameter is 0.1, abs (a - b) <= eps should be true on the first comparison, giving me 2.23611111111111 as the result. I do, however, end up in an endless loop.
Could somebody explain to me what is wrong in the implementation of those functions?
Thanks in advance
This definition
generator f a = generator f (f a)
never generates any list elements: it gets stuck in an infinite recursion instead. You probably want
generator f a = a : generator f (f a)
which makes a the first element, followed by all the others generated by the recursive call.
It could also be beneficial to avoid putting unevaluated thunks in the list. To avoid that, one could use
generator f a = a `seq` (a : generator f (f a))
so that a is evaluated early. This should not matter much in your code, since the
selector immediately evaluates the thunks as soon as they are generated.
Your generator function is missing the a :, as chi's answer correctly points out. However, there's a better solution than just adding that. Get rid of generator altogether and use the built-in function iterate instead (or iterate' from Data.List if you want to avoid unevaluated thunks). These functions have the same behavior that you want from generator, but support optimizations like list fusion that your own function won't. And of course, there's also the advantage that it's one less function that you have to write and maintain.
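For instance, reusing next and selector from the question, the whole computation collapses to a one-liner (a sketch, not from the original answer):
sqrt5 :: Double
sqrt5 = selector 0.1 (iterate (next 5) 2)   -- ≈ 2.2361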

Symbolic theory proving using SBV and Haskell

I'm using SBV (with the Z3 backend) in Haskell to create some theory provers. I want to check whether, for all x and y satisfying given constraints (like x + y = y + x, where + is a "plus operator", not addition), some other terms are valid. I want to define axioms about the + expression (like associativity, identity etc.) and then check further equalities, for example whether a + (b + c) == (a + c) + b is valid for all a, b and c.
I was trying to accomplish it using something like:
main = do
  let x = forall "x"
  let y = forall "y"
  out <- prove $ (x .== x)
  print "end"
But it seems we cannot use the .== operator on symbolic values. Is this a missing feature or wrong usage? Are we able to do it somehow using SBV?
That sort of reasoning is indeed possible, through the use of uninterpreted sorts and functions. Be warned, however, that reasoning about such structures typically requires quantified axioms, and SMT-solvers are usually not terribly good at reasoning with quantifiers.
Having said that, here's how I would go about it, using SBV.
First, some boiler-plate code to get an uninterpreted type T:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics
import Data.SBV
-- Uninterpreted type T
data T = TBase () deriving (Eq, Ord, Data, Typeable, Read, Show)
instance SymWord T
instance HasKind T
type ST = SBV T
Once you do this, you'll have access to an uninterpreted type T and its symbolic counterpart ST. Let's declare plus and zero, again just uninterpreted constants with the right types:
-- Uninterpreted addition
plus :: ST -> ST -> ST
plus = uninterpret "plus"
-- Uninterpreted zero
zero :: ST
zero = uninterpret "zero"
So far, all we told SBV is that there exists a type T, and a function plus, and a constant zero; expressly being uninterpreted. That is, the SMT solver makes no assumptions other than the fact that they have the given types.
Let's first try to prove that 0+x = x:
bad = prove $ \x -> zero `plus` x .== x
If you try this, you'll get the following response:
*Main> bad
Falsifiable. Counter-example:
s0 = T!val!0 :: T
What the SMT solver is telling you is that the property does not hold, and here's a value where it doesn't hold. The value T!val!0 is a Z3-specific response; other solvers can return other things. It's essentially an internal identifier for an inhabitant of the type T; other than that, we know nothing about it. This isn't terribly useful of course, as you don't really know what associations it made for plus and zero, but it is to be expected.
To prove the property, let's tell the SMT solver two more things. First, that plus is commutative. And second, that zero added on the right doesn't do anything. These are done via addAxiom calls. Unfortunately, you have to write your axioms in the SMTLib syntax, as SBV doesn't (at least yet) support axioms written using Haskell. Note also we switch to using the Symbolic monad here:
good = prove $ do
  addAxiom "plus-zero-axioms"
    [ "(assert (forall ((x T) (y T)) (= (plus x y) (plus y x))))"
    , "(assert (forall ((x T)) (= (plus x zero) x)))"
    ]
  x <- free "x"
  return $ zero `plus` x .== x
Note how we told the solver x+y = y+x and x+0 = x, and asked it to prove 0+x = x. Writing axioms this way looks really ugly since you have to use the SMTLib syntax, but that's the current state of affairs. Now we have:
*Main> good
Q.E.D.
Quantified axioms and uninterpreted-types/functions are not the easiest things to use via the SBV interface, but you can get some mileage out of it this way. If you have heavy use of quantifiers in your axioms, it's unlikely that the solver will be able to answer your queries; and will likely respond unknown. It all depends on the solver you use, and how hard the properties to prove are.
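Posing the original a + (b + c) == (a + c) + b question follows the same pattern. A sketch along those lines, stating commutativity and associativity as quantified axioms (per the caveat above, there is no guarantee the solver actually discharges it):
assocComm = prove $ do
  addAxiom "plus-axioms"
    [ "(assert (forall ((x T) (y T)) (= (plus x y) (plus y x))))"
    , "(assert (forall ((x T) (y T) (z T)) (= (plus (plus x y) z) (plus x (plus y z)))))"
    ]
  a <- free "a"
  b <- free "b"
  c <- free "c"
  return $ a `plus` (b `plus` c) .== (a `plus` c) `plus` b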
Your use of the API isn't quite right. The simplest way to prove mathematical equalities would be to use simple functions. For instance, associativity over unbounded Integers can be expressed this way:
prove $ \x y z -> x + (y + z) .== (x + y) + (z :: SInteger)
If you need a more programmatic interface (and sometimes you will), then you can use the Symbolic monad, thusly:
plusAssoc = prove $ do x <- sInteger "x"
                       y <- sInteger "y"
                       z <- sInteger "z"
                       return $ x + (y + z) .== (x + y) + z
I'd recommend browsing through many of the examples provided in the hackage site to get familiar with the API: https://hackage.haskell.org/package/sbv

Are there any Haskell libraries for integrating complex functions?

How to numerically integrate complex, complex-valued functions in Haskell?
Are there any existing libraries for it? numeric-tools operates only on reals.
I am aware that on complex plane there's only line integrals, so the interface I am interested in is something like this:
i = integrate f x a b precision
to calculate integral along straight line from a to b of function f on point x.
i, x, a, b are all of Complex Double or better Num a => Complex a type.
Please... :)
You can make something like this yourself. Suppose you have a function realIntegrate of type (Double -> Double) -> (Double,Double) -> Double, taking a function and a tuple containing the lower and upper bounds, returning the result to some fixed precision. You could define realIntegrate f (lo,hi) = quadRomberg defQuad (lo,hi) f using numeric-tools, for example.
Then we can make your desired function as follows - I'm ignoring the precision for now (and I don't understand what your x parameter is for!):
integrate :: (Complex Double -> Complex Double) -> Complex Double -> Complex Double -> Complex Double
integrate f a b = r :+ i where
  r = realIntegrate realF (0,1)
  i = realIntegrate imagF (0,1)
  realF t = realPart (f (interpolate t)) -- or realF = realPart . f . interpolate
  imagF t = imagPart (f (interpolate t))
  interpolate t = a + (t :+ 0) * (b - a)
So we express the path from a to b as a function on the real interval from 0 to 1 by linear interpolation, take the value of f along that path, integrate the real and imaginary parts separately (I don't know if this can give numerically badly behaved results, though) and reassemble them into the final answer.
I haven't tested this code as I don't have numeric-tools installed, but at least it typechecks :-)
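One detail worth double-checking against your definition of the line integral: with the segment parametrized as gamma(t) = a + t*(b - a), we have dz = (b - a) dt, so the value assembled above still needs to be scaled by (b - a) to give the contour integral of f dz. A self-contained sketch under that assumption, with a crude midpoint rule standing in for the numeric-tools quadrature and a hypothetical name contourIntegrate:
import Data.Complex

-- Crude stand-in for a numeric-tools quadrature routine: midpoint rule on n subintervals.
realIntegrate :: (Double -> Double) -> (Double, Double) -> Double
realIntegrate g (lo, hi) = sum [ g (lo + (fromIntegral k + 0.5) * dt) * dt | k <- [0 .. n - 1] ]
  where
    n  = 1000 :: Int
    dt = (hi - lo) / fromIntegral n

contourIntegrate :: (Complex Double -> Complex Double)
                 -> Complex Double -> Complex Double -> Complex Double
contourIntegrate f a b = (b - a) * (re :+ im)   -- scale by the path derivative dz/dt = b - a
  where
    re = realIntegrate (realPart . f . interpolate) (0, 1)
    im = realIntegrate (imagPart . f . interpolate) (0, 1)
    interpolate t = a + (t :+ 0) * (b - a)

-- e.g. contourIntegrate (\z -> z*z) 0 (1 :+ 1) should come out close to (1+i)^3 / 3, i.e. (-2/3) + (2/3)i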

When are lambda forms necessary in Haskell?

I'm a newbie to Haskell, and a relative newbie to functional programming.
In other (besides Haskell) languages, lambda forms are often very useful.
For example, in Scheme:
(define (deriv-approx f)
  (lambda (h x)
    (/ (- (f (+ x h))
          (f x))
       h)))
Would create a closure (over the function f) to approximate a derivative (at value x, with interval h).
However, this usage of a lambda form doesn't seem to be necessary in Haskell, due to its partial application:
deriv-approx f h x = ( (f (x + h)) - (f x) ) / h
What are some examples where lambda forms are necessary in Haskell?
Edit: replaced 'closure' with 'lambda form'
I'm going to give two slightly indirect answers.
First, consider the following code:
module Lambda where
derivApprox f h x = ( (f (x + h)) - (f x) ) / h
I've compiled this while telling GHC to dump an intermediate representation, which is roughly a simplified version of Haskell used as part of the compilation process, to get this:
Lambda.derivApprox
  :: forall a. GHC.Real.Fractional a => (a -> a) -> a -> a -> a
[LclIdX]
Lambda.derivApprox =
  \ (@ a) ($dFractional :: GHC.Real.Fractional a) ->
    let {
      $dNum :: GHC.Num.Num a
      [LclId]
      $dNum = GHC.Real.$p1Fractional @ a $dFractional } in
    \ (f :: a -> a) (h :: a) (x :: a) ->
      GHC.Real./
        @ a
        $dFractional
        (GHC.Num.- @ a $dNum (f (GHC.Num.+ @ a $dNum x h)) (f x))
        h
If you look past the messy annotations and verbosity, you should be able to see that the compiler has turned everything into lambda expressions. We can consider this an indication that you probably don't need to do so manually.
Conversely, let's consider a situation where you might need lambdas. Here's a function that uses a fold to compose a list of functions:
composeAll :: [a -> a] -> a -> a
composeAll = foldr (.) id
What's that? Not a lambda in sight! In fact, we can go the other way, as well:
composeAll' :: [a -> a] -> a -> a
composeAll' xs x = foldr (\f g x -> f (g x)) id xs x
Not only is this full of lambdas, it's also taking two arguments to the main function and, what's more, applying foldr to all of them. Compare the type of foldr, (a -> b -> b) -> b -> [a] -> b, to the above; apparently it takes three arguments, but above we've applied it to four! Not to mention that the accumulator function takes two arguments, but we have a three argument lambda here. The trick, of course, is that both are returning a function that takes a single argument; and we're simply applying that argument on the spot, instead of juggling lambdas around.
All of which, hopefully, has convinced you that the two forms are equivalent. Lambda forms are never necessary, or perhaps always necessary, because who can tell the difference?
There is no semantic difference between
f x y z w = ...
and
f x y = \z w -> ...
The main difference between expression style (explicit lambdas) and declaration style is a syntactic one. One situation where it matters is when you want to use a where clause:
f x y = \z w -> ...
  where ... -- x and y are in scope, z and w are not
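A concrete (made-up) instance of that scoping difference:
area r = \h -> circumference * h
  where circumference = 2 * pi * r   -- may mention r, but not the lambda-bound h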
It is indeed possible to write any Haskell program without using an explicit lambda anywhere by replacing them with named local functions or partial application.
See also: Declaration vs. expression style.
When you can declare named curried functions (such as your Haskell deriv-approx) it is never necessary to use an explicit lambda expression. Every explicit lambda expression can be replaced with a partial application of a named function that takes the free variables of the lambda expression as its first parameters.
Why one would want to do this in source code is not easy to see, but some implementations essentially work that way.
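A small illustration of that replacement, with hypothetical names, just to make the transformation concrete:
-- the lambda  \y -> x + y  has the free variable x ...
addTo x y = x + y
-- ... so wherever that lambda appeared, the partial application (addTo x) works instead.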
Also, somewhat beside the point, would the following rewriting (different from what I've just described) count as avoiding lambdas for you?
deriv-approx f = let myfunc h x = (f(x+h)-(f x))/h in myfunc
If you only use a function once, e.g. as a parameter to map or foldr or some other higher-order function, then it is often better to use a lambda than a named function, because it immediately becomes clear that the function isn't used anywhere else - it can't be, because it doesn't have a name. When you introduce a new named function, you give people reading your code another thing to remember for the duration of the scope. So lambdas are never strictly speaking necessary, but they are often preferable to the alternative.
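For example, a trivial made-up illustration of that point:
squaresPlusOne = map (\x -> x * x + 1) [1 .. 5]   -- the helper is visibly local to this one call
-- versus introducing a name the reader now has to keep track of:
-- squaresPlusOne' = map sqPlusOne [1 .. 5] where sqPlusOne x = x * x + 1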
