I am working with two binary relations: g_o and pw_o, and I've defined pw_o below:
constants {A : Type} (g_o : A → A → Prop)
def pw_o (x y : A) : Prop := ∀ w : A, (g_o w x → g_o w y) ∧ (g_o y w → g_o x w)
I need to prove the following:
theorem prelim: ∀ x y z : A, g_o x y ∧ pw_o y z → g_o x z :=
I start with these tactics:
begin
intros,
cases a with h1 h2,
end
And I have this:
x y z : A,
h1 : g_o x y,
h2 : pw_o y z
⊢ g_o x z
Since pw_o is defined with a universal quantifier, I'd like to substitute w with x; then I would have (g_o x y → g_o x z) ∧ (g_o z x → g_o y x). After isolating the first conjunct with the "cases" tactic, I can use modus ponens on that first conjunct and h1.
How can I tell Lean to replace w in the definition of pw_o with x and replace x and y in the definition of pw_o with y and z, respectively?
Elimination of the universal quantifier is basically just application,
so to substitute w with x, just apply x to the instance h2 of pw_o y z.
theorem prelim': ∀ x y z : A, g_o x y ∧ pw_o y z → g_o x z :=
begin
intros,
cases a with h1 h2,
cases h2 x with h3 _,
sorry
end
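From there, the modus ponens step you already described closes the goal; assuming the hypothesis names above, one way to replace the sorry would be
exact h3 h1
since h3 : g_o x y → g_o x z (the first conjunct of h2 x) and h1 : g_o x y.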
I am studying Haskell and learning what abstraction, substitution (beta equivalence), application, and free and bound variables (alpha equivalence) are, but I have some doubts about these exercises; I don't know if my solutions are correct.
Make the following substitutions
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
3. ((λ x → y x) (λ y → y x) y) [x:= f y]
Sol. (approximation, I don't know how to do it): ((\x -> y x)(\y -> y x) y) =>β
(\x -> y x)y x)[x:= f y] =>β y x [x:= f y] = y f y
4. ((λ x → λ y → y x x) y) [y:= f z]
Sol. (approximation): ((\x -> (\y -> (y x x))) y) =>β ((\y -> (y x x)) y) =>α ((\y -> (y x x)) f z)
Another doubt I have is whether I can run these expressions on this website. It is a Lambda Calculus Calculator, but I do not know how to run these tests.
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
No, you can't rename y; it's free in (λ x → y x x). Only bound variables can be (consistently) α-renamed. But only free variables can be substituted, and there's no free x in that lambda term.
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
Yes, substituting x for y would allow it to be captured by the λ x, so you indeed must α-rename the x in (λ x → y x x) first to some new unique name, as you did. But you've dropped the application to the free x for some reason; you can't just omit parts of a term, so it's ((\w -> y w w) x)[y:= x]. Now perform the substitution. Note you're not asked to perform the β-reduction of the resulting term, just the substitution.
I'll leave the other two out. Just follow the rules carefully. It's easy if you rename all bound names to unique names first, even if the renaming is not strictly required, for instance
((λ x → y x) (λ y → y x) y) [x:= f y] -->
((λ w → y w) (λ z → z x) y) [x:= f y]
The "unique" part includes also the free variables used in the substitution terms, that might get captured after being substituted otherwise (i.e. without the renaming being performed first, in the terms in which they are being substituted). That's why we had to rename the bound y in the above term, -- because y appears free in the substitution term. We didn't have to rename the bound x but it made it easier that way.
Let us look at the example of some lemma (whose statement and whether it is true or not is irrelevant for this discussion):
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
intros n m p H, cases H with H H,
{cases H with H1 H2, cases H2 with u H2, cases H2 with H2 H3, sorry},
{cases H with H1 H2, sorry}
end
The point I wish to highlight here is that when destructing my hypothesis with the cases tactic,
I did not know any other way but to use the tactic several times (once for each 'layer', so to speak).
If I look at the same lemma in Coq:
Lemma L1 : forall (n m:nat) (p:nat -> Prop),
(p n /\ exists (u:nat), p u /\ p m) \/ (~p n /\ p m) -> n = m.
Proof.
intros n m p [[H1 [u [H2 H3]]]|[H1 H2]].
- admit.
-
Show.
I am able to destruct my assumption with a single nested pattern match.
I am guessing I can do the same sort of thing in Lean but I do not know how. I would be grateful to be told as I find the nested pattern match very convenient in practice.
You can use the rcases tactic. You'll need mathlib for this, and to import tactic.rcases.
import tactic.rcases
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
intros n m p H,
rcases H with ⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩,
end
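As a side note, if your mathlib is recent enough it should also provide rintro in the same file, which combines the intros and the destructuring into a single step; the two tactic lines above could then presumably be written as
rintro n m p (⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩),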
I have been trying to write code which takes all the integers in a tree and returns their sum. I'm trying to do this with a type a, which comes from a data type:
data Tree a = Nil | Value a (Tree a) (Tree a)
deriving Show
and we want to use:
tree = Value 2 (Value 2 (Value 2 Nil Nil) Nil) (Value 2 Nil Nil)
and my code is as follow:
countTree :: (a -> a -> a) -> a -> Tree a -> a
countTree p k (Nil) = h
countTree p k (Value x y z) = x (+) (countTree p k y) (+) (countTree p k z)
and I want to run my code as countTree (+) 0 tree and the results should return 8.
The problem is that when I run my code it tells me that x is applied to four arguments but its type a has zero, which I honestly don't understand. I've been modifying sections of my code, but with no success whatsoever; I could really use some assistance.
x (+) (countTree p k y) (+) (countTree p k z)
is attempting to treat x as a function, and pass to it as arguments all of
(+) (countTree p k y) (+) (countTree p k z)
If you want to have "x + recur left + recur right", you'd want something like:
x + (countTree p k y) + (countTree p k z)
I'm pretty sure, however, that you actually want to use p, not a hard-coded +. Using prefix notation, you'd have to rearrange it a bit, to something like:
(p (p x (countTree p k y)) (countTree p k z))
Or, you could use backticks to inline the calls to p as #bipll suggested:
x `p` (countTree p k y) `p` (countTree p k z)
A side note, but I'm also pretty sure you want h to be k.
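Putting those pieces together, a corrected version of the whole function would presumably look like this (renaming the subtrees to l and r for clarity; p is the combining function and k is the value returned at Nil):
countTree :: (a -> a -> a) -> a -> Tree a -> a
countTree p k Nil           = k
countTree p k (Value x l r) = p (p x (countTree p k l)) (countTree p k r)
With that definition, countTree (+) 0 tree should indeed give 8.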
In the programming language J, is a train of verbs always associative? If it is, are there any proofs?
No, a train of verbs is not associative, and this follows from the definitions. For example, a fork is
(f g h) y = (f y) g (h y)
but
(f (g h)) y = y f ((g h) y) = y f (y g (h y))
which can also be written as y f y g h y. And
((f g) h) y = y (f g) (h y) = y f (g (h y))
which can also be written as y f g h y.
Those three are completely different things.
A train in J is right-associative, and the minimal grouping is a fork; only when it cannot make a fork does it make a hook. So
v v v v v = (v v (v v v))
and
v v v v = (v (v v v)).
How do I do this Haskell in F# cleanly?
add 1 2 x = 3 + x
add 1 x y = 1 + x + y
add z x y = z + x + y
You can't overload the function itself, but you can use pattern matching directly:
let add z x y = // curried multiple parameters
match z, x, y with // convert to three-tuple to match on
| 1, 2, x -> 3 + x
| 1, x, y -> 1 + x + y
| z, x, y -> z + x + y
Usage is as expected: add 1 2 3
If you're willing to use tuples as arguments (i.e. forgo currying and partial application), you can write it even more shorthand:
let add = // expect three-tuple as first (and only) parameter
function // use that one value directly to match on
| 1, 2, x -> 3 + x
| 1, x, y -> 1 + x + y
| z, x, y -> z + x + y
Usage now is: add (1, 2, 3)
Recall that in Haskell the general form of a function is a list of declarations with patterns:
f pat1 ... = e1
f pat2 ... = e2
f pat3 ... = e3
is just sugar for the case analysis:
f x1 ... xn = case (x1, ..., xn) of
(pat1, ..., patn) -> e1
(pat2, ..., patn) -> e2
(pat3, ..., patn) -> e3
so the same translation can be made to other languages with pattern matching but without declaration-level patterns.
This is purely syntactic. Languages like Haskell, Standard ML and Mathematica allow you to write out different match cases as if they were different functions:
factorial 0 = 1
factorial 1 = 1
factorial n = n * factorial(n-1)
whereas languages like OCaml and F# require you to have a single function definition and use match or equivalent in its body:
let rec factorial = function
| 0 -> 1
| 1 -> 1
| n -> n * factorial(n-1)
Note that you don't have to copy the function name over and over again using this syntax and you can factor match cases more easily:
let rec factorial = function
| 0 | 1 -> 1
| n -> n * factorial(n-1)
As yamen wrote, do currying with let f a b = match a, b with ... in F#.
In the classic red-black tree implementation, I find the duplication of the function names and right-hand sides in Standard ML and Haskell quite ugly:
balance :: RB a -> a -> RB a -> RB a
balance (T R a x b) y (T R c z d) = T R (T B a x b) y (T B c z d)
balance (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance a x b = T B a x b
compared to the equivalent OCaml or F#:
let balance = function
| B, z, (T(R, y, T(R, x, a, b), c) | T(R, x, a, T(R, y, b, c))), d
| B, x, a, (T(R, z, T(R, y, b, c), d) | T(R, y, b, T(R, z, c, d))) ->
T(R, y, T(B, x, a, b), T(B, z, c, d))
| a, b, c, d -> T(a, b, c, d)