Proving predicate logic with Isabelle

I'm trying to prove the following lemma:
lemma myLemma6: "(∀x. A(x) ∧ B(x))= ((∀x. A(x)) ∧ (∀x. B(x)))"
I'm trying to start by eliminating the forall quantifiers, so here's what I tried:
lemma myLemma6: "(∀x. A(x) ∧ B(x))= ((∀x. A(x)) ∧ (∀x. B(x)))"
apply (rule iffI)
apply (erule_tac x="x" in allE)
apply (rule allE)
(* goal now: get rid of conj on both sides and the quantifiers on the right *)
apply (erule conjE) (* isn't conjE supposed to be used with elim/erule? *)
apply (rule allI)
apply (assumption)
apply (rule conjI) (* at this point, the following starts to make no sense... *)
apply (rule conjE) (* should be erule? *)
apply (rule conjI)
apply (rule conjI)
...
By the end I was just reacting to the outcome of each apply step, but it seems wrong to me, probably because there is some mistake near the beginning... Could someone explain my error and how to finish this proof correctly?
Thanks in advance

Eliminating the universal quantifier at this early stage is not a good idea because you don't even have a value to plug in yet (the x that you give is not in scope at that point, which is why it is printed with that orange background in Isabelle/jEdit).
After you do iffI you have two goals:
goal (2 subgoals):
1. ∀x. A x ∧ B x ⟹ (∀x. A x) ∧ (∀x. B x)
2. (∀x. A x) ∧ (∀x. B x) ⟹ ∀x. A x ∧ B x
Let's focus on the first one for now. You should first apply the introduction rules on the right-hand side, namely conjI and allI. That leaves you with
goal (3 subgoals):
1. ⋀x. ∀x. A x ∧ B x ⟹ A x
2. ⋀x. ∀x. A x ∧ B x ⟹ B x
3. (∀x. A x) ∧ (∀x. B x) ⟹ ∀x. A x ∧ B x
Now you can apply allE instantiated with x and the first goal becomes ⋀x. A x ∧ B x ⟹ A x, which you can then solve with erule conjE and assumption. The second goal works similarly.
For the last goal, it is similar again: apply the introduction rules first, then apply the elimination rules and assumption and you're done.
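For reference, here is one way the full apply-script could look, following exactly the steps described above (a sketch, not the only possible script; the exact number of allE steps may need adjusting, since erule picks whichever universal assumption unifies first):
lemma myLemma6: "(∀x. A x ∧ B x) = ((∀x. A x) ∧ (∀x. B x))"
  apply (rule iffI)
   (* left to right: introduction rules first, then eliminate *)
   apply (rule conjI)
    apply (rule allI)
    apply (erule_tac x = x in allE)
    apply (erule conjE)
    apply assumption
   apply (rule allI)
   apply (erule_tac x = x in allE)
   apply (erule conjE)
   apply assumption
  (* right to left: same strategy *)
  apply (rule allI)
  apply (rule conjI)
   apply (erule conjE)
   apply (erule_tac x = x in allE)
   apply (erule_tac x = x in allE)
   apply assumption
  apply (erule conjE)
  apply (erule_tac x = x in allE)
  apply (erule_tac x = x in allE)
  apply assumption
  done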
Of course, all the standard provers for Isabelle such as auto, force, blast and even the simple ones like metis, meson, iprover can easily solve this automatically, but that's probably not what you were going for here.

Related

How to compare two functions for equivalence, as in (λx.2*x) == (λx.x+x)?

Is there a way to compare two functions for equality? For example, (λx.2*x) == (λx.x+x) should return true, because those are obviously equivalent.
It's well known that function equality is undecidable in general, so you'll have to pick a subset of the problem that you're interested in. You might consider some of these partial solutions:
Presburger arithmetic is a decidable fragment of first-order logic + arithmetic.
The universe package offers function equality tests for total functions with finite domain.
You can check that your functions are equal on a whole bunch of inputs and treat that as evidence for equality on the untested inputs; check out QuickCheck (see the sketch after this list).
SMT solvers make a best effort, sometimes responding "don't know" instead of "equal" or "not equal". There are several bindings to SMT solvers on Hackage; I don't have enough experience to suggest a best one, but Thomas M. DuBuisson suggests sbv.
There's a fun line of research on deciding function equality and other things on compact functions; the basics of this research are described in the blog post Seemingly impossible functional programs. (Note that compactness is a very strong and very subtle condition! It's not one that most Haskell functions satisfy.)
If you know your functions are linear, you can find a basis for the source space; then every function has a unique matrix representation.
You could attempt to define your own expression language, prove that equivalence is decidable for this language, and then embed that language in Haskell. This is the most flexible but also the most difficult way to make progress.
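For instance, the QuickCheck approach from the list above might look like this minimal sketch (prop_twice and the module wrapper are my own names):
import Test.QuickCheck
-- check that the two lambdas agree on random inputs; passing tests
-- are evidence of equality, not a proof
prop_twice :: Integer -> Bool
prop_twice x = (\y -> 2 * y) x == (\y -> y + y) x
main :: IO ()
main = quickCheck prop_twice   -- prints "+++ OK, passed 100 tests."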
This is undecidable in general, but for a suitable subset, you can indeed do it today effectively using SMT solvers:
$ ghci
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Prelude> :m Data.SBV
Prelude Data.SBV> (\x -> 2 * x) === (\x -> x + x :: SInteger)
Q.E.D.
Prelude Data.SBV> (\x -> 2 * x) === (\x -> 1 + x + x :: SInteger)
Falsifiable. Counter-example:
s0 = 0 :: Integer
For details, see: https://hackage.haskell.org/package/sbv
In addition to the practical examples given in the other answers, let us pick the subset of functions expressible in typed lambda calculus; we can also allow product and sum types. Although checking whether two functions are equal can be as simple as applying them to a variable and comparing the results, we cannot build the equality function within the programming language itself.
ETA: λProlog is a logic programming language for manipulating (typed lambda calculus) functions.
2 years have passed, but I want to add a little remark to this question. Originally, I asked if there is any way to tell if (λx.2*x) is equal to (λx.x+x). Addition and multiplication on the λ-calculus can be defined as:
add = (a b c d -> (a c (b c d)))
mul = (a b c -> (a (b c)))
Now, if you normalize the following terms:
add_x_x = (λx . (add x x))
mul_x_2 = (mul (λf x . (f (f x))))
You get:
result = (a b c -> (a b (a b c)))
For both programs. Since their normal forms are equal, both programs are obviously equal. While this doesn't work in general, it does work for many terms in practice. (λx.(mul 2 (mul 3 x))) and (λx.(mul 6 x)) both have the same normal form, for example.
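Here is a rough Haskell transcription of these definitions (my own sketch; Haskell cannot inspect normal forms, so a separate term representation would be needed to compare them syntactically, and below the two programs are only observed by running them):
{-# LANGUAGE RankNTypes #-}
-- Church numerals: a numeral n applies a function n times
type Church = forall a. (a -> a) -> a -> a
two :: Church
two f x = f (f x)
add, mul :: Church -> Church -> Church
add a b c d = a c (b c d)   -- the add from the answer above
mul a b c   = a (b c)       -- the mul from the answer above
toInt :: Church -> Int
toInt n = n (+ 1) 0
main :: IO ()
main = print (toInt (add two two), toInt (mul two two))   -- (4,4)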
In a language with symbolic computation like Mathematica, or in C# with a computer algebra library:
MathObject f(MathObject x) => x + x;
MathObject g(MathObject x) => 2 * x;
{
var x = new Symbol("x");
Console.WriteLine(f(x) == g(x));
}
The above displays 'True' at the console.
Proving two functions equal is undecidable in general but one can still prove functional equality in special cases as in your question.
Here's a sample proof in Lean
def foo : (λ x, 2 * x) = (λ x, x + x) :=
begin
apply funext, intro x,
cases x,
{ refl },
{ simp,
dsimp [has_mul.mul, nat.mul],
have zz : ∀ a : nat, 0 + a = a := by simp,
rw zz }
end
One can do the same in other dependently typed languages such as Coq, Agda, and Idris.
The above is a tactic style proof. The actual definition of foo (the proof) that gets generated is quite a mouthful to be written by hand:
def foo : (λ (x : ℕ), 2 * x) = λ (x : ℕ), x + x :=
funext
(λ (x : ℕ),
nat.cases_on x (eq.refl (2 * 0))
(λ (a : ℕ),
eq.mpr
(id_locked
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3), congr (congr_arg eq e_1) e_2)
(2 * nat.succ a)
(nat.succ a * 2)
(mul_comm 2 (nat.succ a))
(nat.succ a + nat.succ a)
(nat.succ a + nat.succ a)
(eq.refl (nat.succ a + nat.succ a))))
(id_locked
(eq.mpr
(id_locked
(eq.rec (eq.refl (0 + nat.succ a + nat.succ a = nat.succ a + nat.succ a))
(eq.mpr
(id_locked
(eq.trans
(forall_congr_eq
(λ (a : ℕ),
eq.trans
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3),
congr (congr_arg eq e_1) e_2)
(0 + a)
a
(zero_add a)
a
a
(eq.refl a))
(propext (eq_self_iff_true a))))
(propext (implies_true_iff ℕ))))
trivial
(nat.succ a))))
(eq.refl (nat.succ a + nat.succ a))))))

Square of the sum minus sum of the squares in J (or how to take the train?)

Still in the learning process of J... The problem to solve is now to express the square of the sum minus the sum of the squares of the first natural numbers.
The naive solution is
(*:+/>:i.100) - (+/*:>:i.100)
Now, I want to use a fork to be able to write the list >:i.100 only one time. My fork should look like:
h
/ \
f g
| |
x x
where f is the square of the sum, g is the sum of the squares, and h is minus. So, naively, I wrote:
((*:+/) - (+/*:)) >:i.100
but it gives me a domain error. Why? I also tried:
(+/ (*: - +/) *:) >: i.100
and this time, it gives me a long list of numbers... I guess it has something to do with the @: conjunction, but I still can't figure out what At does... Continuing my quest, I finally got
((+/*+/) - +/@:*:) >:i.100
but I don't like the fact that I manually compute the square of the sum instead of using the *: verb, and I don't really understand why I need the @: conjunction. Could somebody shed some light on this problem?
(+/*:) and (*:+/) don't do what you think they do.
Actually, your f is Q (S x) (the square of the sum of x) and your g is S (Q x) (the sum of the squares of x), writing Q for *: and S for +/. You can see that for any f,g we have f (g y) = (f @: g) y.
So, you can write
(Q (S x)) h (S (Q x))
as
((Q @: S) x) h ((S @: Q) x)
which is now equivalent to
(f x) h (g x)
or
(f h g) x
Thus,
((*: @: (+/)) - (+/ @: *:)) >: i.1000
Note also that *: @: (+/) is not the same as *: @: +/, since +/ is not one verb (like *:) but a composite verb built from a verb (+) and an adverb (/).
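For readers more at home in Haskell, here is a loose sketch of the same idea (my own analogy, not J): a fork (f h g) y computes (f y) h (g y), and @: is ordinary function composition:
-- the J fork (f h g) y means (f y) h (g y); @: plays the role of (.)
fork :: (b -> c -> d) -> (a -> b) -> (a -> c) -> a -> d
fork h f g y = f y `h` g y
square :: Integer -> Integer
square n = n * n
-- square of the sum minus the sum of the squares of 1..100
answer :: Integer
answer = fork (-) (square . sum) (sum . map square) [1 .. 100]
main :: IO ()
main = print answer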

What are structures with "subtraction" but no inverse?

A group extends the idea of a monoid to allow for inverses. This allows for:
gremove :: (Group a) => a -> a -> a
gremove x y = x `mappend` (invert y)
But what about structures like natural numbers, where there is no inverse? I'm thinking about:
class (Monoid a) => MRemove a where
mremove :: a -> a -> a
with laws:
x `mremove` x = mempty
x `mremove` mempty = x
(x `mappend` y) `mremove` y = x
And additionally:
class (MRemove a) => Group a where
invert :: a -> a
invert x = mempty `mremove` x
-- | For defining MRemove in terms of Group
defaultMRemove :: (Group a) => a -> a -> a
defaultMRemove x y = x `mappend` (invert y)
So, my question is: what is MRemove?
The closest common structure I can think of is a torsor, but it doesn't really apply to naturals in an obvious way. Think of the operations you can perform on time values:
"Subtract" two times, yielding an interval of time (a different type)
Add an interval of time to a time to get another time
Add or subtract intervals of time to get another interval
Very few other operations on pairs of time values make sense. You can't add times, or multiply them, or anything we're used to in algebra. On the other hand, the interval type is much more flexible, supporting addition, subtraction, inversion, and so on. A torsor could thus be defined in Haskell as:
class Group (Diff a) => Torsor a where
  type Diff a
  subtract :: a -> a -> Diff a
  add :: a -> Diff a -> a
Anyway, that's an attempt at answering your direct question (you can find more at John Baez's excellent page on them), even though it doesn't cover your natural example.
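To make this concrete, here is a toy model of the class above (a sketch; Time, Interval, and the minimal Group class are illustrative inventions, and "subtract" is renamed sub to avoid clashing with Prelude.subtract):
{-# LANGUAGE TypeFamilies, FlexibleContexts #-}
-- a minimal Group class, since base has none
class Monoid a => Group a where
  invert :: a -> a
-- intervals of time form a group under addition
newtype Interval = Interval Double deriving Show
instance Semigroup Interval where Interval a <> Interval b = Interval (a + b)
instance Monoid Interval where mempty = Interval 0
instance Group Interval where invert (Interval a) = Interval (negate a)
-- times (seconds since midnight, say) form a torsor over intervals
newtype Time = Time Double deriving Show
class Group (Diff a) => Torsor a where
  type Diff a
  sub :: a -> a -> Diff a
  add :: a -> Diff a -> a
instance Torsor Time where
  type Diff Time = Interval
  sub (Time a) (Time b) = Interval (a - b)
  add (Time t) (Interval d) = Time (t + d)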
The only other thing that comes close to answering your question, as far as I know, is the solution to code reuse in Coq's (semi)ring solver tactic. They introduce a notion of an "almost ring" with axioms similar to the ones you describe, to allow them to reuse most of their code for naturals as well as full rings. I don't think the idea is very widespread, though.
The name you're looking for is cancellative monoid, though strictly speaking a cancellative semigroup is enough to capture the concept of subtraction. I was wondering about the very same question a year or so ago, and I found the answer by digging through mathematical jargon. Have a look at the CancellativeMonoid class in the incremental-parser package. I'm currently preparing a new package that would contain only the monoid subclasses and a few of their instances, and I hope to release it soon.
A similar question has been asked here. The answer given there is a commutative monoid with monus.
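That matches the natural numbers: take mremove to be truncated subtraction (monus). A quick sketch against the MRemove class from the question (the NatSum wrapper is mine, since Natural has no direct Monoid instance):
import Numeric.Natural (Natural)
-- the MRemove class from the question
class Monoid a => MRemove a where
  mremove :: a -> a -> a
-- naturals under addition
newtype NatSum = NatSum Natural deriving (Eq, Show)
instance Semigroup NatSum where NatSum a <> NatSum b = NatSum (a + b)
instance Monoid NatSum where mempty = NatSum 0
-- monus: truncated subtraction, never going below zero, so all three
-- laws hold: x `mremove` x = mempty, x `mremove` mempty = x,
-- (x `mappend` y) `mremove` y = x
instance MRemove NatSum where
  mremove (NatSum a) (NatSum b)
    | a >= b    = NatSum (a - b)
    | otherwise = NatSum 0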
EDIT: This answer is wrong. See my comment below. I'm preserving the answer in case it is interesting.
Take a look at subtraction semigroups. It's a semigroup with a subtraction operator that obeys these laws:
x - (y - x) = x
x - (x - y) = y - (y - x)
(x - y) - z = (x - z) - y
x <> (y - z) = (x <> y) - (x <> z)
(y - z) <> x = (y <> x) - (z <> x)
Sadly, I cannot find resources that discuss a "subtraction monoid", but I assume it would need to obey the following additional law:
x - x = 0

Haskell: foldl' accumulator parameter

I've been asking a few questions about strictness, but I think I've missed the mark before. Hopefully this is more precise.
Let's say we have:
n = 1000000
f z = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) z [1..n]
Without changing f, what should I set
z = ...
so that f z does not overflow the stack (i.e. runs in constant space regardless of the size of n)?
It's okay if the answer requires GHC extensions.
My first thought is to define:
g (a1, a2) = (!a1, !a2)
and then
z = g (0, 0)
But I don't think g is valid Haskell.
Your strict foldl' is only going to evaluate the result of your lambda at each step of the fold to weak head normal form (WHNF), i.e. it is only strict in the outermost constructor. Thus the tuple constructor will be evaluated, but the additions inside the tuple may build up as thunks. This in-depth answer actually seems to address your exact situation.
W/R/T your g: you are thinking of the BangPatterns extension, which would look like
g (!a1, !a2) = (a1, a2)
and which evaluates a1 and a2 to WHNF before returning them in the tuple.
What you want to be concerned about is not your initial accumulator, but rather your lambda expression. This would be a nice solution:
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1..n]
EDIT: After noticing your other questions I see I didn't read this one very carefully. Your goal is to have "strict data" so to speak. Your other option, then, is to make a new tuple type that has strictness tags on its fields:
data Tuple a b = Tuple !a !b
Then when you pattern match on Tuple a b, a and b will be evaluated.
You'll need to change your function regardless.
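Putting the strict-tuple version together as a runnable sketch (note that f itself changes here, as noted just above):
import Data.List (foldl')
-- a pair with strict fields: both running values are forced at each step
data Pair = Pair !Integer !Integer deriving Show
n :: Integer
n = 1000000
f :: Pair -> Pair
f z = foldl' (\(Pair x1 x2) y -> Pair (x1 + y) (y - x2)) z [1 .. n]
main :: IO ()
main = print (f (Pair 0 0))   -- runs in constant space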
There is nothing you can do without changing f. If f were overloaded in the type of the pair you could use strict pairs, but as it stands you're locked in to what f does. There's some small hope that the compiler (strictness analysis and transformations) can avoid the stack growth, but nothing you can count on.

Confusion about function composition in Haskell

Consider following function definition in ghci.
let myF = sin . cos . sum
where . stands for composition of two functions (right-associative). I can call this as
myF [3.14, 3.14]
and it gives me the desired result: it passes the list [3.14, 3.14] to sum, whose result is passed to cos, and so on. However, if I do this in the interpreter
let myF y = sin . cos . sum y
or
let myF y = sin . cos (sum y)
then I run into trouble. Modifying this into the following gives the desired result.
let myF y = sin . cos $ sum y
or
let myF y = sin . cos . sum $ y
The type of (.) suggests that there should not be a problem with the following form, since sum y is also a function (isn't it? After all, everything is a function in Haskell?)
let myF y = sin . cos . sum y -- this should work?
What is more interesting is that to make it work with two (or more) arguments (think of passing the list [3.14, 3.14] as two arguments x and y), I have to write the following:
let (myF x) y = (sin . cos . (+ x)) y
myF 3.14 3.14 -- it works!
let myF = sin . cos . (+)
myF 3.14 3.14 -- doesn't work!
There is some discussion on HaskellWiki regarding this form, which they call "point-free" style: http://www.haskell.org/haskellwiki/Pointfree. Reading this article, I suspect that this form is different from composition of two lambda expressions. I am getting confused when I try to draw a line separating these two styles.
Let's look at the types. For sin and cos we have:
cos, sin :: Floating a => a -> a
For sum:
sum :: Num a => [a] -> a
Now, sum y turns that into a
sum y :: Num a => a
which is a value, not a function (you could call it a function with no arguments, but this is very tricky, and you would then also need a name for () -> a functions; there was a discussion somewhere about this, Conal spoke about it, but I cannot find the link now).
Anyway, trying cos . sum y won't work because . expects both sides to be functions, of types a -> b and b -> c (the signature is (b -> c) -> (a -> b) -> (a -> c)), and sum y does not have such a type. That's why you need to include parentheses or $.
As for point-free style, the simplest translation recipe is this:
Take your function and move the last argument to the end of the expression, separated by plain function application. For example, in the case of mysum x y = x + y, we have y at the end, but we cannot remove it right away; rewriting it as mysum x y = (x +) y works.
Remove said argument. In our case: mysum x = (x +).
Repeat until you have no more arguments. Here: mysum = (+).
(I chose a simple example; for more convoluted cases you'll have to use flip and friends. The steps are spelled out as compilable definitions below.)
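The recipe at each step, as definitions that compile (the numbered names are mine):
mysum0, mysum1, mysum2 :: Int -> Int -> Int
mysum0 x y = x + y   -- starting point
mysum1 x = (x +)     -- body rewritten as (x +) y, then y dropped
mysum2 = (+)         -- fully point-free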
No, sum y is not a function. It's a number, just like sum [1, 2, 3] is. It therefore makes complete sense that you cannot use the function composition operator (.) with it.
Not everything in Haskell is a function.
The obligatory cryptic answer is this: (space) binds more tightly than .
Most whitespace in Haskell can be thought of as a very high-fixity $ (the "apply" function). w x . y z is basically the same as (w $ x) . (y $ z)
When you are first learning about $ and . you should also make sure you learn about (space), and understand how the language implicitly parenthesizes things in ways that may not (at first blush) appear intuitive.
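To tie the answers together, a short sketch contrasting the working forms from the question (type signatures are my addition):
myF, myF' :: [Double] -> Double
myF = sin . cos . sum        -- point-free: a composition of three functions
myF' y = sin . cos $ sum y   -- ($) binds loosest, so sum y is computed first
main :: IO ()
main = print (myF [3.14, 3.14] == myF' [3.14, 3.14])   -- True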
