turn proofs for nat into proofs for non-negative ints - lean

I proved a fairly trivial lemma:
lemma two_ne_four_mul_any (n:ℕ) : 2 ≠ 2 * 2 * n
Obviously, the same should hold for non-negative integers, rationals, reals, etc.:
lemma two_ne_four_mul_any (z:ℤ) (nonneg: 0 ≤ z): 2 ≠ 2 * 2 * z
In general, if we have p n for some n:nat we should be able to conclude 0 ≤ z → p' z where p' is the "same" as p.
However, I don't even see how to formulate this in Lean, let alone how it can be proven.
So, the question is, can this be proven in Lean, and how would one go about it?

can this be proven in Lean
If it's correct mathematics, it can be proven in Lean. You'll need to give the second lemma a different name from the first though.
import tactic
lemma two_ne_four_mul_any (n:ℕ) : 2 ≠ 2 * 2 * n := sorry
lemma two_ne_four_mul_any' (z:ℤ) (nonneg: 0 ≤ z) : 2 ≠ 2 * 2 * z :=
begin
-- get the natural
rcases int.eq_coe_of_zero_le nonneg with ⟨n, rfl⟩,
-- apply the lemma for naturals
apply_mod_cast (two_ne_four_mul_any n)
end
You have to be a bit careful here -- for example, subtraction on naturals and integers can produce different results (2 - 3 = 0 in the naturals, but of course -1 in the integers). So if p n := n - 3 = 0 is a statement about naturals, then p 2 is true, but the naive "same" statement is false for integers. The cast tactics know what is true and what is not, though.
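To make that caveat concrete, here is a small Lean 3 sketch (assuming mathlib is available, as in the snippet above):

```lean
import tactic

-- natural-number subtraction is truncated: 2 - 3 is 0 in ℕ
example : (2 : ℕ) - 3 = 0 := by norm_num

-- the "same" statement over ℤ is false, since there 2 - 3 = -1
example : (2 : ℤ) - 3 = -1 := by norm_num
example : ¬ ((2 : ℤ) - 3 = 0) := by norm_num
```

So a statement about ℕ involving subtraction cannot in general be transported to ℤ verbatim; the cast tactics refuse exactly these cases.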


Write a recursive function to get the factorial of any given number in Haskell

I am new to Haskell programming. I tried to write a Haskell function to get the factorial of a given number by recursion, but it gets a stack overflow exception.
factor x = x * (factor x-1)
You are getting the error because your recursion has no base case -- and, just as importantly, because factor x-1 parses as (factor x) - 1 (function application binds tighter than subtraction), so the argument never shrinks. You need factor (x-1), plus a condition under which the recursion stops.
There are many ways of doing the factorial calculation - recursion, product, or folds.
factorial n = product [1..n] -- product algorithm
import Data.List
factorial n = foldl' (*) 1 [1..n] -- fold algorithm
In the case of recursion, we always have two things:
The base case
How to get from one case to another
Considering some factorials:
1! = 1
2! = 2 x 1 = 2 x 1!
3! = 3 x 2 x 1 = 3 x 2!
4! = 4 x 3 x 2 x 1 = 4 x 3!
The base case, the final case, is 1!, or 1. The way of getting from (n-1)! to n! is to multiply (n-1)! by n.
When we set out the factorial calculation, we have to put the base case first, otherwise Haskell will pattern match on the general case and never reach the base case. It is also good practice to specify the type explicitly, in this case Integer -> Integer (arbitrary-precision integers).
factor :: Integer -> Integer
factor 1 = 1                    -- base case; must come first
factor n = n * factor (n-1)     -- general case

Minimum pumping length of a regular language

Consider the language
L = { a^(3n+5) | n ≥ 0 }
What is the minimum pumping length of L ?
By the minimum pumping length for a regular language L, I understand the smallest p such that every string u ∈ L of length at least p can be written u = xyz where |xy| ≤ p, y ≠ λ and xy^i z ∈ L for all i ≥ 0. In your case, every string a^(3n+5) ∈ L with n > 0 can be written
a^(3n+5) = xyz with x = λ, y = a^3, z = a^(3n+2)
and then xy^i z = a^(3(n+i-1)+5) ∈ L for all i ≥ 0. This decomposition satisfies the above conditions for p = 6, while p = 5 does not work because the string a^5 cannot be pumped, so the minimum pumping length of L is 6. Note also that the minimal DFA for L has 8 states; in general the minimum pumping length of a regular language can be less than the number of states in its minimal DFA.
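As a sanity check, here is a small Python brute-force (an illustration, not part of the original answer): it confirms that a^5 cannot be pumped and searches for the smallest p that satisfies the definition on all strings of L up to a length bound. The finite pump check (i = 0..3) is exact for this unary language, since a block whose length is not a multiple of 3 already fails within those few pumps.

```python
def in_L(s):
    # L = { a^(3n+5) : n >= 0 }: strings of a's of length 5, 8, 11, ...
    return set(s) <= {"a"} and len(s) >= 5 and (len(s) - 5) % 3 == 0

def pumpable(s, p):
    # is there a split s = xyz with |xy| <= p and |y| >= 1 that pumps inside L?
    for i in range(min(p, len(s)) + 1):          # i = |xy|
        for j in range(1, i + 1):                # j = |y|
            x, y, z = s[:i - j], s[i - j:i], s[i:]
            if all(in_L(x + y * k + z) for k in range(4)):
                return True
    return False

def is_pumping_length(p, bound=40):
    # p is a pumping length if every string in L of length >= p is pumpable
    return all(pumpable("a" * n, p) for n in range(p, bound) if in_L("a" * n))

print(pumpable("a" * 5, 5))                      # a^5 admits no valid split
print(next(p for p in range(1, 20) if is_pumping_length(p)))
```

The search reports the smallest valid p, and shows that any p ≤ 5 fails because it would have to pump a^5.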

Proof that a regular expression is not a regular language using pumping lemma

Ok, I know that this isn't a programming question but it is a computing question so it is relevant.
Basically, how can I use the pumping lemma to prove that this language is not regular?
{w in {0,1}* | if the length of w is odd then the middle symbol is 0}
Please answer this as simply as possible, as whilst I know about models of computation, I am relatively new to this area.
Thank you very much in advance!
According to the pumping lemma, if that language is regular then there must exist a number p such that every string in the language of length at least p can be decomposed as x + y + z, where x, y, and z are strings, |y| >= 1, |x + y| <= p, and x + (y * i) + z is in the language for every non-negative integer i.
Now observe that for every non-negative integer i, the string "1" * i + "0" + "1" * i is in the language. (That is, the string of i 1s followed by a single 0 and then i more 1s)
Specifically, the string S consisting of p 1s, followed by a 0, and then p more 1s is in the language. Since this string has length 2p + 1, it is long enough to be broken into three strings x, y, and z as in the pumping lemma. Since |x + y| <= p, both x and y consist entirely of 1s, and the only 0 in S is in z. Now consider the string S' = x + y + y + y + z. Since S' has 2|y| more characters than S, its length is still odd. But we added 1s only to the left of the only 0 in S, and none to its right. So S' does not have a 0 as its middle character, and therefore S' is not in the language.
Therefore, we've shown that the language can't be pumped as the pumping lemma requires. Therefore, the language is not regular.
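The case analysis above can be machine-checked for small p. Here is a Python sketch (an illustration, not part of the original proof) that tries every legal split of the witness string and confirms that pumping y to y^3 always leaves the language:

```python
def in_lang(w):
    # w is in the language unless |w| is odd and its middle symbol is not 0
    return len(w) % 2 == 0 or w[len(w) // 2] == "0"

def witness_breaks_pumping(p):
    s = "1" * p + "0" + "1" * p          # p ones, a zero, p more ones
    assert in_lang(s)
    # every split s = xyz with |xy| <= p and |y| >= 1 ...
    for i in range(1, p + 1):            # i = |xy|
        for j in range(1, i + 1):        # j = |y|
            x, y, z = s[:i - j], s[i - j:i], s[i:]
            # ... must fall out of the language once y is pumped to y^3
            if in_lang(x + y * 3 + z):
                return False
    return True

print(all(witness_breaks_pumping(p) for p in range(1, 10)))
```

Every candidate p is defeated by its witness, exactly as the proof argues.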

Using Ogden’s Lemma versus regular Pumping Lemma for Context-Free Grammars

I'm learning the difference between the lemmata in the question. Every reference I can find uses the example:
{(a^i)(b^j)(c^k)(d^l) : i = 0 or j = k = l}
to show the difference between the two. I can find an example using the regular lemma to "disprove" it.
Select w = uvxyz, s.t. |vy| > 0, |vxy| <= p.
Suppose w contains an equal number of b's, c's, d's.
I selected:
u,v,x = ε
y = (the string of a's)
z = (the rest of the string w)
Pumping y will just add to the number of a's, and if |b| = |c| = |d| held before, it still does.
(Similar argument for if w has no a's. Then just pump whatever you want.)
My question is, how does Ogden's lemma change this strategy? What does "marking" do?
Thanks!
One important stumbling block here is that "being able to pump" does not imply context free; rather, "not being able to pump" shows a language is not context free. Similarly, being grey does not imply you're an elephant, but being an elephant does imply you're grey...
Grammar context free => Pumping Lemma is definitely satisfied
Grammar not context free => Pumping Lemma *may* be satisfied
Pumping Lemma satisfied => Grammar *may* be context free
Pumping Lemma not satisfied => Grammar definitely not context free
# (we can write exactly the same for Ogden's Lemma)
# Here "=>" should be read as implies
That is to say, in order to demonstrate that a language is not context free we must show it fails(!) to satisfy one of these lemmata. (Even if it satisfies both we haven't proved it is context free.)
Below is a sketch proof that L = { a^i b^j c^k d^l where i = 0 or j = k = l} is not context free (although it satisfies The Pumping Lemma, it doesn't satisfy Ogden's Lemma):
Pumping lemma for context free grammars:
If a language L is context-free, then there exists some integer p ≥ 1 such that any string s in L with |s| ≥ p (where p is a pumping length) can be written as
s = uvxyz
with substrings u, v, x, y and z, such that:
1. |vxy| ≤ p,
2. |vy| ≥ 1, and
3. u v^n x y^n z is in L for every natural number n.
In our example:
For any s in L (with |s|>=p):
If s contains a's then choose v=a, x=epsilon, y=epsilon (and we have no contradiction to the language being context-free).
If s contains no a's (s=b^j c^k d^l and one of j, k or l is non-zero, since |s|>=1) then choose v=b if j>0, v=c elif k>0, else v=d; and x=epsilon, y=epsilon (and again we have no contradiction to the language being context-free).
(So unfortunately: using the Pumping Lemma we are unable to prove anything about L!
Note: the above was essentially the argument you gave in the question.)
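To see concretely why the Pumping Lemma gets no traction here, a small Python sketch (an illustration, not from the original answer) with a membership test for L, checking that pumping the chosen v = a never leaves the language:

```python
def in_L(w):
    # L = { a^i b^j c^k d^l : i = 0 or j = k = l }
    counts = {c: 0 for c in "abcd"}
    last = -1
    for ch in w:
        if ch not in counts:
            return False
        pos = "abcd".index(ch)
        if pos < last:                  # letters out of a,b,c,d order
            return False
        last = pos
        counts[ch] += 1
    i, j, k, l = (counts[c] for c in "abcd")
    return i == 0 or (j == k and k == l)

# the decomposition from the answer: if s contains a's, pump v = "a";
# pumping changes only the number of a's, so j = k = l is untouched
print(all(in_L("a" * n + "bbccdd") for n in range(0, 6)))
```

Even pumping down to zero a's stays inside L, because i = 0 satisfies the disjunction on its own.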
Ogden's Lemma:
If a language L is context-free, then there exists some number p > 0 (where p may or may not be a pumping length) such that for any string w of length at least p in L and every way of "marking" p or more of the positions in w, w can be written as
w = uxyzv
with strings u, x, y, z, and v such that:
1. xz has at least one marked position,
2. xyz has at most p marked positions, and
3. u x^n y z^n v is in L for every n ≥ 0.
Note: this marking is the key part of Ogden's Lemma, it says: "not only can every element be "pumped", but it can be pumped using any p marked positions".
In our example:
Let w = a b^p c^p d^p and mark the positions of the b's (of which there are p, so w satisfies the requirements of Ogden's Lemma), and let u,x,y,z,v be a decomposition satisfying the conditions of Ogden's Lemma (w=uxyzv).
If x or z contains multiple distinct symbols, then u x^2 y z^2 v is not in L, because there will be symbols in the wrong order (consider (bc)^2 = bcbc).
Either x or z must contain a b (by Lemma condition 1.)
This leaves us with five cases to check (for i,j>0):
x=epsilon, z=b^i
x=a, z=b^i
x=b^i, z=c^j
x=b^i, z=d^j
x=b^i, z=epsilon
in every case (by comparing the number of b's, c's and d's) we can see that u x^2 y z^2 v is not in L, and we have a contradiction (!) to the language being context-free; that is, we've proved that L is not context free.
To summarise, L is not context-free, but this cannot be demonstrated using The Pumping Lemma (but can by Ogden's Lemma) and thus we can say that:
Ogden's lemma is a second, stronger pumping lemma for context-free languages.
I'm not too sure about how to use Ogden's lemma here but your "proof" is wrong. When using the pumping lemma to prove that a language is not context free you cannot choose the splitting into uvxyz. The splitting is chosen "for you" and you have to show that the lemma is not fulfilled for any uvxyz.

When is the difference between quotRem and divMod useful?

From the haskell report:
The quot, rem, div, and mod class
methods satisfy these laws if y is
non-zero:
(x `quot` y)*y + (x `rem` y) == x
(x `div` y)*y + (x `mod` y) == x
quot is integer division truncated
toward zero, while the result of div
is truncated toward negative infinity.
For example:
Prelude> (-12) `quot` 5
-2
Prelude> (-12) `div` 5
-3
What are some examples of where the difference between how the result is truncated matters?
Many languages have a "mod" or "%" operator that gives the remainder after division with truncation towards 0; for example C, C++, and Java, and probably C#, would say:
(-11)/5 = -2
(-11)%5 = -1
5*((-11)/5) + (-11)%5 = 5*(-2) + (-1) = -11.
Haskell's quot and rem are intended to imitate this behaviour. I can imagine compatibility with the output of some C program might be desirable in some contrived situation.
Haskell's div and mod, and likewise Python's // and %, follow the convention of mathematicians (at least number-theorists) of always truncating division downwards (towards negative infinity, not towards 0), so that the remainder is always nonnegative when the divisor is positive. Thus in Python,
(-11)//5 = -3
(-11)%5 = 4
5*((-11)//5) + (-11)%5 = 5*(-3) + 4 = -11.
Haskell's div and mod follow this behaviour.
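The two conventions can be seen side by side in Python, whose native // and % form the floored (div/mod) pair; a C-style quot/rem pair can be emulated with math.trunc (a quick sketch, not tied to any particular Haskell code):

```python
import math

def quot_rem(x, y):
    # truncate the quotient toward zero (Haskell's quot/rem, C's / and %)
    q = math.trunc(x / y)
    return q, x - q * y

def div_mod(x, y):
    # floor the quotient toward negative infinity (Haskell's div/mod)
    return x // y, x % y

print(quot_rem(-12, 5))   # (-2, -2), matching quot/rem above
print(div_mod(-12, 5))    # (-3, 3), matching div/mod above
```

Both pairs satisfy q*y + r == x; they differ only in how the quotient is rounded, and hence in the sign of the remainder.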
This is not exactly an answer to your question, but in GHC on x86, quotRem on Int will compile down to a single machine instruction, whereas divMod does quite a bit more work. So if you are in a speed-critical section and working on positive numbers only, quotRem is the way to go.
A simple example where it would matter is testing if an integer is even or odd.
let buggyOdd x = x `rem` 2 == 1
buggyOdd 1    -- True
buggyOdd (-1) -- False (wrong!)
let odd x = x `mod` 2 == 1
odd 1    -- True
odd (-1) -- True
Note, of course, you could avoid thinking about these issues by just defining odd in this way:
let odd x = x `rem` 2 /= 0
odd 1    -- True
odd (-1) -- True
In general, just remember that, for y > 0, x `mod` y always returns something >= 0, while x `rem` y returns 0 or something of the same sign as x.
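That closing claim can be spot-checked in Python, where % is the floored mod and math.fmod behaves like the truncated rem (a quick sketch):

```python
import math

# floored mod: for y > 0 the result is always in [0, y)
assert all(x % 7 >= 0 for x in range(-50, 50))

# truncated rem: the result is 0 or has the sign of x
assert all(math.fmod(x, 7) <= 0 for x in range(-50, 1))
assert all(math.fmod(x, 7) >= 0 for x in range(0, 50))

# so an odd test written with rem misclassifies negative odd numbers
buggy_odd = lambda x: math.fmod(x, 2) == 1
odd = lambda x: x % 2 == 1
print(buggy_odd(-1), odd(-1))   # False True
```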
