How to compare two functions for equivalence, as in (λx.2*x) == (λx.x+x)? - haskell

Is there a way to compare two functions for equality? For example, (λx.2*x) == (λx.x+x) should return true, because those are obviously equivalent.

It's pretty well-known that function equality is undecidable in general, so you'll have to pick a subset of the problem that you're interested in. You might consider some of these partial solutions:
Presburger arithmetic (first-order arithmetic with addition but without multiplication) is a decidable theory.
The universe package offers function equality tests for total functions with finite domain.
You can check that your functions are equal on a whole bunch of inputs and treat that as evidence for equality on the untested inputs; check out QuickCheck (a small sketch follows this list).
SMT solvers make a best effort, sometimes responding "don't know" instead of "equal" or "not equal". There are several bindings to SMT solvers on Hackage; I don't have enough experience to suggest a best one, but Thomas M. DuBuisson suggests sbv.
There's a fun line of research on deciding function equality and other things on compact functions; the basics of this research are described in the blog post Seemingly impossible functional programs. (Note that compactness is a very strong and very subtle condition! It's not one that most Haskell functions satisfy.)
If you know your functions are linear, you can find a basis for the source space; then every function has a unique matrix representation.
You could attempt to define your own expression language, prove that equivalence is decidable for this language, and then embed that language in Haskell. This is the most flexible but also the most difficult way to make progress.
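For instance, here is a minimal sketch of the QuickCheck idea from the list above (the property name is mine); passing tests are evidence, not proof, that the two functions agree on untested inputs:

import Test.QuickCheck

-- check 2*x == x+x on randomly generated Integers
prop_doubleEq :: Integer -> Bool
prop_doubleEq x = 2 * x == x + x

main :: IO ()
main = quickCheck prop_doubleEq   -- prints "+++ OK, passed 100 tests."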

This is undecidable in general, but for a suitable subset, you can indeed do it today effectively using SMT solvers:
$ ghci
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Prelude> :m Data.SBV
Prelude Data.SBV> (\x -> 2 * x) === (\x -> x + x :: SInteger)
Q.E.D.
Prelude Data.SBV> (\x -> 2 * x) === (\x -> 1 + x + x :: SInteger)
Falsifiable. Counter-example:
s0 = 0 :: Integer
For details, see: https://hackage.haskell.org/package/sbv

In addition to the practical examples given in the other answer, let us pick the subset of functions expressible in the typed lambda calculus; we can also allow product and sum types. Although checking whether two such functions are equal can be as simple as applying them to a fresh variable and comparing the results, we cannot build the equality function within the programming language itself.
ETA: λProlog is a logic programming language for manipulating (typed lambda calculus) functions.

2 years have passed, but I want to add a little remark to this question. Originally, I asked if there is any way to tell if (λx.2*x) is equal to (λx.x+x). Addition and multiplication on Church-encoded natural numbers can be defined as:
add = (λa b c d -> (a c (b c d)))
mul = (λa b c -> (a (b c)))
Now, if you normalize the following terms:
add_x_x = (λx . (add x x))
mul_x_2 = (mul (λf x . (f (f x))))
You get:
result = (a b c -> (a b (a b c)))
For both programs. Since their normal forms are equal, both programs are obviously equal. While this doesn't work in general, it does work for many terms in practice. (λx.(mul 2 (mul 3 x))) and (λx.(mul 6 x)) both have the same normal form, for example.
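To make the idea concrete, here is a small sketch of this normalize-and-compare approach in Haskell (entirely my own illustration, using a de Bruijn representation so that α-equivalence is just structural equality; it diverges on terms without a normal form):

data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- shift free variables >= cutoff by d
shift :: Int -> Int -> Term -> Term
shift d cutoff (Var k)   = Var (if k >= cutoff then k + d else k)
shift d cutoff (Lam t)   = Lam (shift d (cutoff + 1) t)
shift d cutoff (App f a) = App (shift d cutoff f) (shift d cutoff a)

-- substitute s for variable j
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- weak head normal form, then full normal-order normalization
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam body -> whnf (shift (-1) 0 (subst 0 (shift 1 0 a) body))
  f'       -> App f' a
whnf t = t

normalize :: Term -> Term
normalize t = case whnf t of
  Lam body -> Lam (normalize body)
  App f a  -> App (normalize f) (normalize a)
  v        -> v

-- two terms count as "equal" when their normal forms coincide
equivalent :: Term -> Term -> Bool
equivalent s t = normalize s == normalize t

-- The example from above: add = λa b c d. a c (b c d), mul = λa b c. a (b c),
-- two = λf x. f (f x); then λx. add x x and mul two share a normal form.
add, mul, two, addXX, mulX2 :: Term
add   = Lam (Lam (Lam (Lam (App (App (Var 3) (Var 1)) (App (App (Var 2) (Var 1)) (Var 0))))))
mul   = Lam (Lam (Lam (App (Var 2) (App (Var 1) (Var 0)))))
two   = Lam (Lam (App (Var 1) (App (Var 1) (Var 0))))
addXX = Lam (App (App add (Var 0)) (Var 0))
mulX2 = App mul two
-- ghci> equivalent addXX mulX2
-- True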

In a language with built-in symbolic computation like Mathematica, the comparison is immediate: 2*x == x + x evaluates to True. In C# with a computer algebra library:
MathObject f(MathObject x) => x + x;
MathObject g(MathObject x) => 2 * x;
{
    var x = new Symbol("x");
    Console.WriteLine(f(x) == g(x));
}
The above displays 'True' at the console.

Proving two functions equal is undecidable in general, but one can still prove functional equality in special cases, such as the one in your question.
Here's a sample proof in Lean:
def foo : (λ x, 2 * x) = (λ x, x + x) :=
begin
  apply funext, intro x,
  cases x,
  { refl },
  { simp,
    dsimp [has_mul.mul, nat.mul],
    have zz : ∀ a : nat, 0 + a = a := by simp,
    rw zz }
end
One can do the same in other dependently typed languages such as Coq, Agda, and Idris.
The above is a tactic style proof. The actual definition of foo (the proof) that gets generated is quite a mouthful to be written by hand:
def foo : (λ (x : ℕ), 2 * x) = λ (x : ℕ), x + x :=
funext
(λ (x : ℕ),
nat.cases_on x (eq.refl (2 * 0))
(λ (a : ℕ),
eq.mpr
(id_locked
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3), congr (congr_arg eq e_1) e_2)
(2 * nat.succ a)
(nat.succ a * 2)
(mul_comm 2 (nat.succ a))
(nat.succ a + nat.succ a)
(nat.succ a + nat.succ a)
(eq.refl (nat.succ a + nat.succ a))))
(id_locked
(eq.mpr
(id_locked
(eq.rec (eq.refl (0 + nat.succ a + nat.succ a = nat.succ a + nat.succ a))
(eq.mpr
(id_locked
(eq.trans
(forall_congr_eq
(λ (a : ℕ),
eq.trans
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3),
congr (congr_arg eq e_1) e_2)
(0 + a)
a
(zero_add a)
a
a
(eq.refl a))
(propext (eq_self_iff_true a))))
(propext (implies_true_iff ℕ))))
trivial
(nat.succ a))))
(eq.refl (nat.succ a + nat.succ a))))))

Related

Perplexing behaviour when approximating the derivative in haskell

I have defined a typeclass Differentiable to be implemented by any type which can operate on infinitesimals.
Here is an example:
class Fractional a => Differentiable a where
    dif :: (a -> a) -> (a -> a)
    difs :: (a -> a) -> [a -> a]
    difs = iterate dif

instance Differentiable Double where
    dif f x = (f (x + dx) - f x) / dx
        where dx = 0.000001

func :: Double -> Double
func = exp
I have also defined a simple Double -> Double function to differentiate.
But when I test this in GHCi, this happens:
... $ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
Prelude> :l testing
[1 of 1] Compiling Main ( testing.hs, interpreted )
Ok, one module loaded.
*Main> :t func
func :: Double -> Double
*Main> derivatives = difs func
*Main> :t derivatives
derivatives :: [Double -> Double]
*Main> terms = map (\f -> f 0) derivatives
*Main> :t terms
terms :: [Double]
*Main> take 5 terms
[1.0,1.0000004999621837,1.000088900582341,-222.0446049250313,4.440892098500626e8]
*Main>
The approximations to the nth derivative of e^x|x=0 are:
[1.0,1.0000004999621837,1.000088900582341,-222.0446049250313,4.440892098500626e8]
The first and second derivatives are perfectly reasonable approximations given the setup, but suddenly, the third derivative of func at 0 is... -222.0446049250313! HOW!!?
The method you're using here is a finite difference method of 1st-order accuracy.
Layman's translation: it works, but is pretty rubbish numerically speaking. Specifically, because it's only 1st-order accurate, you need really small steps to get good accuracy even with exact real arithmetic. You did choose a small step size, so that's fine, but a small step size brings in another problem: rounding errors. You need to take the difference f (x+δx) - f x with small δx, meaning the difference is small whereas the individual values may be large. That always exposes floating-point inaccuracy – consider for example
Prelude> (1 + pi*1e-13) - 1
3.141931159689193e-13
That might not actually hurt that much, but since you then need to divide by δx you boost up the error.
This issue just gets worse/compounded as you go to the higher derivatives, because now each of the f' x and f' (x+δx) already has a (non-identical!) boosted error on it, so taking the difference and boosting again is a clear recipe for disaster.
The simplest way to remediate the problem is to switch to a 2nd-order accurate method, the obvious being central difference. Then you can make the step a lot bigger, and thus largely avoid rounding issues:
Prelude> let dif f x = (f (x + δx) - f(x - δx)) / (2*δx) where δx = 1e-3
Prelude> take 8 $ ($0) <$> iterate dif exp
[1.0,1.0000001666666813,1.0000003333454632,1.0000004990740052,0.9999917560676863,0.9957312752106873,8.673617379884035,7806.255641895632]
You see the first couple of derivatives are good now, but then eventually it also becomes unstable – and this will happen with any FD method as you iterate it. But that's anyway not really a good approach: note that every evaluation of the n-th derivative requires 2 evaluations of the n−1-th. So, the complexity is exponential in the derivative degree.
A better approach to approximate the n-th derivative of an opaque function is to fit an n-th order polynomial to it and differentiate this symbolically/automatically. Or, if the function is not opaque, differentiate it symbolically/automatically.
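To see what "differentiate it automatically" can look like, here is a minimal forward-mode AD sketch with dual numbers (the Dual type and diffAD are my own illustration, not code from the question; the ad package on Hackage does this properly, including towers of higher derivatives). Arithmetic on a dual number propagates the derivative exactly, so there is no step size and no cancellation:

data Dual = Dual Double Double   -- value, derivative

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (signum a * a')
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

instance Fractional Dual where
  Dual a a' / Dual b b' = Dual (a / b) ((a' * b - a * b') / (b * b))
  fromRational r        = Dual (fromRational r) 0

instance Floating Dual where
  pi              = Dual pi 0
  exp (Dual a a') = Dual (exp a) (a' * exp a)
  log (Dual a a') = Dual (log a) (a' / a)
  sin (Dual a a') = Dual (sin a) (a' * cos a)
  cos (Dual a a') = Dual (cos a) (negate (a' * sin a))
  -- (the remaining Floating methods follow the same chain-rule pattern)

-- exact first derivative of f at x (up to ordinary rounding inside f itself)
diffAD :: (Dual -> Dual) -> Double -> Double
diffAD f x = let Dual _ d = f (Dual x 1) in d

-- ghci> diffAD exp 0
-- 1.0

As written this only yields first derivatives; higher ones need nested duals or a lazy Taylor-tower representation (which the ad package provides), but it already shows how the step-size/rounding trade-off disappears.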
tl;dr: the dx denominator gets small exponentially quickly, which means that even small errors in the numerator get blown out of proportion.
Let's do some equational reasoning on the first "bad" approximation, the third derivative.
dif (dif (dif exp))
= { definition of dif }
dif (dif (\x -> (exp (x+dx) - exp x)/dx))
= { definition of dif }
dif (\y -> ((\x -> (exp (x+dx) - exp x)/dx) (y+dx)
- (\x -> (exp (x+dx) - exp x)/dx) y
)/dx)
= { questionable algebra }
dif (\y -> (exp (y + 2*dx) - 2*exp (y + dx) + exp y)/dx^2)
= { alpha }
dif (\x -> (exp (x + 2*dx) - 2*exp (x + dx) + exp x)/dx^2)
= { definition of dif and questionable algebra }
\x -> (exp (x + 3*dx) - 3*exp (x + 2*dx) + 3*exp (x + dx) - exp x)/dx^3
Hopefully by now you can see the pattern we're getting into: as we take more and more derivatives, the error in the numerator gets worse (because we are computing exp farther and farther away from the original point, x + 3*dx is three times as far away e.g.) while the sensitivity to error in the denominator gets higher (because we are computing dx^n for the nth derivative). By the third derivative, these two factors become untenable:
> exp (3*dx) - 3*exp (2*dx) + 3*exp (dx) - exp 0
-4.440892098500626e-16
> dx^3
9.999999999999999e-19
So you can see that, although the error in the numerator is only about 5e-16, the sensitivity to error in the denominator is so high that you start to see nonsensical answers.

Open Type Level Proofs in Haskell/Idris

In Idris/Haskell, one can prove properties of data by annotating the types and using GADT constructors, such as with Vect; however, this requires hardcoding the property into the type (e.g. a Vect has to be a separate type from a List).
Is it possible to have types with an open set of properties (such as list carrying both a length and running average), for example by overloading constructors or using something in the vein of Effect?
I believe that McBride has answered that question (for Type Theory) in his ornament paper (pdf). The concept you are looking for is the one of an algebraic ornament (emphasis mine):
An algebra φ describes a structural method to interpret data, giving
rise to a fold φ operation, applying the method recursively.
Unsurprisingly, the resulting tree of calls to φ has the same
structure as the original data — that is the point, after all. But
what if that were, before all, the point? Suppose we wanted to fix the
result of fold φ in advance, representing only those data which would
deliver the answer we wanted. We should need the data to fit with a
tree of φ calls which delivers that answer. Can we restrict our data
to exactly that? Of course we can, if we index by the answer.
Now, let's write some code. I've put the whole thing in a gist because I'm going to interleave comments in here. Also, I'm using Agda but it should be easy to translate to Idris.
module reornament where
We start by defining what it means to be an algebra delivering Bs acting on a list of As. We need a base case (a value of type B) as well as a way to combine the head of the list with the induction hypothesis.
ListAlg : (A B : Set) → Set
ListAlg A B = B × (A → B → B)
Given this definition, we can define a type of lists of As indexed by Bs whose value is precisely the result of the computation corresponding to a given ListAlg A B. In the nil case, the result is the base case provided to us by the algebra (proj₁ alg) whilst in the cons case, we simply combine the induction hypothesis with the new head using the second projection:
data ListSpec (A : Set) {B : Set} (alg : ListAlg A B) : (b : B) → Set where
  nil  : ListSpec A alg (proj₁ alg)
  cons : (a : A) {b : B} (as : ListSpec A alg b) → ListSpec A alg (proj₂ alg a b)
Ok, let's import some libraries and see a couple of examples now:
open import Data.Product
open import Data.Nat
open import Data.List
The algebra computing the length of a list is given by 0 as the base case and const suc as the way to combine an A and the length of the tail to build the length of the current list. Hence:
AlgLength : {A : Set} → ListAlg A ℕ
AlgLength = 0 , (λ _ → suc)
If the elements are natural numbers then they can be summed. The algebra corresponding to that has 0 as the base case and _+_ as the way to combine an ℕ together with the sum of the elements contained in the tail. Hence:
AlgSum : ListAlg ℕ ℕ
AlgSum = 0 , _+_
Crazy thought: If we have two algebras working on the same elements, we can combine them! This way we'll track 2 invariants rather than one!
Alg× : {A B C : Set} (algB : ListAlg A B) (algC : ListAlg A C) →
       ListAlg A (B × C)
Alg× (b , sucB) (c , sucC) = (b , c) , (λ a → λ { (b , c) → sucB a b , sucC a c })
And now the examples:
If we are tracking the length, then we can define Vectors:
Vec : (A : Set) (n : ℕ) → Set
Vec A n = ListSpec A AlgLength n
And have, e.g., this vector of length 4:
allFin4 : Vec ℕ 4
allFin4 = cons 0 (cons 1 (cons 2 (cons 3 nil)))
If we are tracking the sum of the elements, then we can define a notion of distribution: a statistical distribution is a list of probabilities whose sum is 100:
Distribution : Set
Distribution = ListSpec ℕ AlgSum 100
And we can define a uniform one for instance:
uniform : Distribution
uniform = cons 25 (cons 25 (cons 25 (cons 25 nil)))
Finally, by combining the length and sum algebras, we can implement the notion of sized distribution.
SizedDistribution : ℕ → Set
SizedDistribution n = ListSpec ℕ (Alg× AlgLength AlgSum) (n , 100)
And give this nice uniform distribution for a 4-element set:
uniform4 : SizedDistribution 4
uniform4 = cons 25 (cons 25 (cons 25 (cons 25 nil)))
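Haskell cannot abstract over arbitrary algebras at the type level as directly as the Agda above, but the specific "length and sum at once" example can be approximated with type-level naturals. This is my own rough sketch, not a translation of the full ornament machinery; GHC's built-in Nat solver copes with the concrete literals here, while more polymorphic uses typically need a plugin such as ghc-typelits-natnormalise:

{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeApplications, TypeOperators #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits

-- a list of type-level naturals indexed by both its length and its sum
data ListLS (len :: Nat) (total :: Nat) where
  Nil  :: ListLS 0 0
  Cons :: Proxy a -> ListLS len total -> ListLS (len + 1) (a + total)
  -- (adding a KnownNat constraint to Cons would let you read entries back at runtime)

-- A "sized distribution": 4 entries summing to 100, as in the Agda example.
uniform4 :: ListLS 4 100
uniform4 = Cons (Proxy @25) (Cons (Proxy @25) (Cons (Proxy @25) (Cons (Proxy @25) Nil)))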

What are structures with "subtraction" but no inverse?

A group extends the idea of a monoid to allow for inverses. This allows for:
gremove :: (Group a) => a -> a -> a
gremove x y = x `mappend` (invert y)
But what about structures like natural numbers, where there is no inverse? I'm thinking about:
class (Monoid a) => MRemove a where
    mremove :: a -> a -> a
with laws:
x `mremove` x = mempty
x `mremove` mempty = x
(x `mappend` y) `mremove` y = x
And additionally:
class (MRemove a) => Group a where
    invert :: a -> a
    invert x = mempty `mremove` x

-- | For defining MRemove in terms of Group
defaultMRemove :: (Group a) => a -> a -> a
defaultMRemove x y = x `mappend` (invert y)
So, my question is: what is MRemove?
The closest common structure I can think of is a torsor, but it doesn't really apply to naturals in an obvious way. Think of the operations you can perform on time values:
"Subtract" two times, yielding an interval of time (a different type)
Add an interval of time to a time to get another time
Add or subtract intervals of time to get another interval
Very few other operations on pairs of time values make sense. You can't add times, or multiply them, or anything we're used to in algebra. On the other hand, the interval type is much more flexible, supporting addition, subtraction, inversion, and so on. A torsor could thus be defined in Haskell as:
class Group (Diff a) => Torsor a where
    type Diff a
    subtract :: a -> a -> Diff a
    add :: a -> Diff a -> a
Anyway, that's an attempt at answering your direct question (you can find more at John Baez's excellent page on them), even though it doesn't cover your natural example.
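To make that sketch concrete, here is a toy instance (my own illustration: the methods are renamed to avoid clashing with the Prelude, the Group superclass is dropped so the snippet stands alone, and Time is just seconds stored in an Integer):

{-# LANGUAGE TypeFamilies #-}

class Torsor a where
    type Diff a
    subtractT :: a -> a -> Diff a   -- "subtract" in the sketch above
    addT      :: a -> Diff a -> a   -- "add" in the sketch above

newtype Time = Time Integer deriving (Eq, Show)   -- a point in time, in seconds

instance Torsor Time where
    type Diff Time = Integer                      -- intervals are plain integers
    subtractT (Time a) (Time b) = a - b
    addT (Time a) d = Time (a + d)

-- ghci> addT (Time 100) (subtractT (Time 30) (Time 10))
-- Time 120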
The only other thing that comes close to answering your question, as far as I know, is the solution to code reuse in Coq's (semi)ring solver tactic. They introduce a notion of an "almost ring" with axioms similar to the ones you describe, to allow them to reuse most of their code for naturals as well as full rings. I don't think the idea is very widespread, though.
The name you're looking for is cancellative monoid, though strictly speaking a cancellative semigroup is enough to capture the concept of subtraction. I was wondering about the very same question a year or so ago, and I found the answer by digging through mathematical jargon. Have a look at the CancellativeMonoid class in the incremental-parser package. I'm currently preparing a new package that would contain only the monoid subclasses and a few of their instances, and I hope to release it soon.
A similar question has been asked here. The answer given there is a commutative monoid with monus.
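For the naturals specifically, the monus is just truncated subtraction. A minimal sketch of the question's MRemove class instantiated this way (the NatSum wrapper is my own name):

import Numeric.Natural (Natural)

newtype NatSum = NatSum Natural deriving (Eq, Show)

instance Semigroup NatSum where
    NatSum a <> NatSum b = NatSum (a + b)

instance Monoid NatSum where
    mempty = NatSum 0

class Monoid a => MRemove a where
    mremove :: a -> a -> a

instance MRemove NatSum where
    mremove (NatSum a) (NatSum b)
        | a >= b    = NatSum (a - b)
        | otherwise = NatSum 0        -- truncate at zero: the "monus"

-- The question's laws hold: x `mremove` x = mempty, x `mremove` mempty = x,
-- and (x <> y) `mremove` y = x (no truncation is needed in that last case).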
EDIT: This answer is wrong. See my comment below. I'm preserving the answer in case it is interesting.
Take a look at subtraction semigroups. It's a semigroup with a subtraction operator that obeys these laws:
x - (y - x) = x
x - (x - y) = y - (y - x)
(x - y) - z = (x - z) - y
x <> (y - z) = (x <> y) - (x <> z)
(y - z) <> x = (y <> x) - (z <> x)
Sadly, I cannot find resources that discuss a "subtraction monoid", but I assume it would need to obey the following additional law:
x - x = 0

Why inductive datatypes forbid types like `data Bad a = C (Bad a -> a)` where the type recursion occurs in front of ->?

Agda manual on Inductive Data Types and Pattern Matching states:
To ensure normalisation, inductive occurrences must appear in strictly positive positions. For instance, the following datatype is not allowed:
data Bad : Set where
  bad : (Bad → Bad) → Bad
since there is a negative occurrence of Bad in the argument to the constructor.
Why is this requirement necessary for inductive data types?
The data type you gave is special in that it is an embedding of the untyped lambda calculus.
data Bad : Set where
  bad : (Bad → Bad) → Bad
unbad : Bad → (Bad → Bad)
unbad (bad f) = f
Let's see how. Recall, the untyped lambda calculus has these terms:
e := x | \x. e | e e'
We can define a translation [[e]] from untyped lambda calculus terms to Agda terms of type Bad (though not in Agda):
[[x]] = x
[[\x. e]] = bad (\x -> [[e]])
[[e e']] = unbad [[e]] [[e']]
Now you can use your favorite non-terminating untyped lambda term to produce a non-terminating term of type Bad. For example, we could translate (\x. x x) (\x. x x) to the non-terminating expression of type Bad below:
unbad (bad (\x -> unbad x x)) (bad (\x -> unbad x x))
Although the type happened to be a particularly convenient form for this argument, it can be generalized with a bit of work to any data type with negative occurrences of recursion.
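The same translation can be rendered directly in Haskell, which happily accepts the non-strictly-positive type (this snippet is my own rendering of the argument above):

newtype Bad = Bad { unbad :: Bad -> Bad }

-- [[\x. x x]]
selfApply :: Bad
selfApply = Bad (\x -> unbad x x)

-- [[(\x. x x) (\x. x x)]]: well typed, but forcing it loops forever
omega :: Bad
omega = unbad selfApply selfApply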
An example of how such a data type allows us to inhabit any type is given in Turner, D.A. (2004-07-28), Total Functional Programming, sect. 3.1, page 758, "Rule 2: Type recursion must be covariant".
Let's make a more elaborate example using Haskell. We'll start with a "bad" recursive data type
data Bad a = C (Bad a -> a)
and construct the Y combinator from it without any other form of recursion. This means that having such a data type allows us to construct any kind of recursion, or inhabit any type by an infinite recursion.
The Y combinator in the untyped lambda calculus is defined as
Y = λf.(λx.f (x x)) (λx.f (x x))
The key to it is that we apply x to itself in x x. In typed languages this is not directly possible, because there is no valid type x could possibly have. But our Bad data type allows this modulo adding/removing the constructor:
selfApp :: Bad a -> a
selfApp x@(C x') = x' x
Taking x :: Bad a, we can unwrap its constructor and apply the function inside to x itself. Once we know how to do this, it's easy to construct the Y combinator:
yc :: (a -> a) -> a
yc f = let fxx = C (\x -> f (selfApp x))   -- this is the λx.f (x x) part of Y
       in selfApp fxx
Note that neither selfApp nor yc are recursive, there is no recursive call of a function to itself. Recursion appears only in our recursive data type.
We can check that the constructed combinator indeed does what it should. We can make an infinite loop:
loop :: a
loop = yc id
or compute let's say GCD:
gcd' :: Int -> Int -> Int
gcd' = yc gcd0
  where
    gcd0 :: (Int -> Int -> Int) -> (Int -> Int -> Int)
    gcd0 rec a b | c == 0    = b
                 | otherwise = rec b c
      where c = a `mod` b

Implementing Iota in Haskell

Iota is a ridiculously small "programming language" using only one combinator. I'm interested in understanding how it works, but it would be helpful to see the implementation in a language I'm familiar with.
I found an implementation of the Iota programming language written in Scheme. I've been having a little trouble translating it to Haskell though. It's rather simple, but I'm relatively new to both Haskell and Scheme.
How would you write an equivalent Iota implementation in Haskell?
(let iota ()
  (if (eq? #\* (read-char)) ((iota)(iota))
      (lambda (c) ((c (lambda (x) (lambda (y) (lambda (z) ((x z)(y z))))))
                   (lambda (x) (lambda (y) x))))))
I've been teaching myself some of this stuff, so I sure hope I get the following right...
As n.m. mentions, the fact that Haskell is typed is of enormous importance to this question; type systems restrict what expressions can be formed, and in particular the most basic type systems for the lambda calculus forbid self-application, which ends up giving you a non-Turing complete language. Turing completeness is added on top of the basic type system as an extra feature to the language (either a fix :: (a -> a) -> a operator or recursive types).
This doesn't mean you can't implement this at all in Haskell, but rather that such an implementation is not going to have just one operator.
Approach #1: implement the second example one-point combinatory logic basis from here, and add a fix function:
iota' :: ((t1 -> t2 -> t1)
          -> ((t5 -> t4 -> t3) -> (t5 -> t4) -> t5 -> t3)
          -> (t6 -> t7 -> t6)
          -> t)
      -> t
iota' x = x k s k
  where k x y = x
        s x y z = x z (y z)

fix :: (a -> a) -> a
fix f = let result = f result in result
Now you can write any program in terms of iota' and fix. Explaining how this works is a bit involved. (EDIT: note that this iota' is not the same as the λx.x S K in the original question; it's λx.x K S K, which is also Turing-complete. It is the case that iota' programs are going to be different from iota programs. I've tried the iota = λx.x S K definition in Haskell; it typechecks, but when you try k = iota (iota (iota iota)) and s = iota (iota (iota (iota iota))) you get type errors.)
Approach #2: Untyped lambda calculus denotations can be embedded into Haskell using this recursive type:
newtype D = In { out :: D -> D }
D is basically a type whose elements are functions from D to D. We have In :: (D -> D) -> D to convert a D -> D function into a plain D, and out :: D -> (D -> D) to do the opposite. So if we have x :: D, we can self-apply it by doing out x x :: D.
Given that, we can now write:
iota :: D
iota = In $ \x -> out (out x s) k
  where k = In $ \x -> In $ \y -> x
        s = In $ \x -> In $ \y -> In $ \z -> out (out x z) (out y z)
This requires some "noise" from the In and out; Haskell still forbids you to apply a D to a D, but we can use In and out to get around this. You can't actually do anything useful with values of type D, but you could design a useful type around the same pattern.
EDIT: iota is basically λx.x S K, where K = λx.λy.x and S = λx.λy.λz.x z (y z). I.e., iota takes a two-argument function and applies it to S and K; so by passing a function that returns its first argument you get S, and by passing a function that returns its second argument you get K. So if you can write the "return first argument" and the "return second argument" with iota, you can write S and K with iota. But S and K are enough to get Turing completeness, so you also get Turing completeness in the bargain. It does turn out that you can write the requisite selector functions with iota, so iota is enough for Turing completeness.
So this reduces the problem of understanding iota to understanding the SK calculus.
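To round this off, here is a sketch of the Scheme reader loop from the question on top of the D encoding from approach #2 (parseIota is my own name, and it reads from a String rather than stdin): a '*' means "apply the next two terms to each other", and any other character denotes the iota combinator itself.

parseIota :: String -> (D, String)
parseIota ('*':rest) =
  let (f, rest')  = parseIota rest
      (a, rest'') = parseIota rest'
  in  (out f a, rest'')
parseIota (_:rest) = (iota, rest)
parseIota []       = error "parseIota: unexpected end of input"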
