I'm just starting out with Lean. Can I define the assumptions of a theorem?
For example: proving that for any pair of integers min and max and every number x such that min <= x <= max, we have min^2 <= x^2 <= max^2. I can quantify over all integers, but how can I make the theorem apply only when the values of min and max meet a constraint (min <= max)?
I would state this theorem like this
example (min max x : ℤ) (hxm : x ≤ max) (hmx : min ≤ x) : min^2 ≤ x^2 ∧ x^2 ≤ max^2
The fact that min ≤ max follows from the other two assumptions, so you needn't include it as an assumption.
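For instance, the implication is just transitivity of ≤; a one-line sketch using mathlib's le_trans:
example (min max x : ℤ) (hxm : x ≤ max) (hmx : min ≤ x) : min ≤ max :=
le_trans hmx hxm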
Related
I proved some fairly trivial lemma
lemma two_ne_four_mul_any (n:ℕ) : 2 ≠ 2 * 2 * n
Obviously, the same should hold for non-negative integers, rationals, reals, etc.:
lemma two_ne_four_mul_any (z:ℤ) (nonneg: 0 ≤ z): 2 ≠ 2 * 2 * z
In general, if we have p n for some n:nat we should be able to conclude 0 ≤ z → p' z where p' is the "same" as p.
However, I don't even see how to formulate this in Lean, let alone how it can be proven.
So, the question is, can this be proven in Lean, and how would one go about it?
can this be proven in Lean
If it's correct mathematics, it can be proven in Lean. You'll need to give the second lemma a different name from the first though.
import tactic
lemma two_ne_four_mul_any (n:ℕ) : 2 ≠ 2 * 2 * n := sorry
lemma two_ne_four_mul_any' (z:ℤ) (nonneg: 0 ≤ z) : 2 ≠ 2 * 2 * z :=
begin
-- get the natural
rcases int.eq_coe_of_zero_le nonneg with ⟨n, rfl⟩,
-- apply the lemma for naturals
apply_mod_cast (two_ne_four_mul_any n)
end
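As an aside, the sorry above can be discharged as well; a sketch, assuming norm_num and omega behave as usual on this goal:
lemma two_ne_four_mul_any (n:ℕ) : 2 ≠ 2 * 2 * n :=
begin
  intro h,        -- h : 2 = 2 * 2 * n
  norm_num at h,  -- h : 2 = 4 * n
  omega           -- no natural n satisfies 2 = 4 * n
end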
You have to be a bit careful here -- for example, subtraction on naturals and integers can produce different results (e.g. 2 - 3 = 0 in the naturals, while it's of course -1 in the integers; so if p n := n - 3 = 0 is a statement about naturals, then p 2 is true but the naive "same" statement is false for integers). The cast tactics know what is true and what is not, though.
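That caveat is easy to check in Lean itself; dec_trivial evaluates both sides by computation:
example : (2 - 3 : ℕ) = 0 := dec_trivial   -- truncated subtraction on ℕ
example : (2 - 3 : ℤ) = -1 := dec_trivial  -- ordinary subtraction on ℤ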
Consider the language
L = { a^(3n+5) | n ≥ 0 }
What is the minimum pumping length of L?
By the minimum pumping length of a regular language L, I understand the smallest p such that every string u ∈ L of length at least p can be written u = xyz where |xy| ≤ p, y ≠ λ and xy^i z ∈ L for all i ≥ 0. In your case, every string a^(3n+5) ∈ L with n > 0 can be written:
a^(3n+5) = xyz with x = λ, y = a^3, z = a^(3(n-1)+5)
and then xy^i z = a^(3(n-1+i)+5) ∈ L for all i ≥ 0. This decomposition satisfies the above conditions for any p ≥ 3, and the strings of L of length at least 6 are exactly those with n > 0, so p = 6 works. No p ≤ 5 works, because the string a^5 cannot be pumped: removing y would leave a string of length at most 4, and no such string is in L. So the minimum pumping length of L is 6. Note also that the minimal DFA for L has 8 states, so this is an example where the minimum pumping length of a regular language is strictly less than the number of states in its minimal DFA.
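If you want to double-check this mechanically, here is a small Haskell sanity check (the helper names inL and pumps are mine): it verifies that every string of L of length at least 6 pumps with x = λ and y = a^3, and that pumping a^5 down leaves the language:
-- lengths of strings in L = { a^(3n+5) | n >= 0 }
inL :: Int -> Bool
inL len = len >= 5 && (len - 5) `mod` 3 == 0

-- after pumping y = a^3 (with x = λ, so |xy| = 3) i times, the length is len - 3 + 3*i
pumps :: Int -> Bool
pumps len = all (\i -> inL (len - 3 + 3 * i)) [0 .. 20]

main :: IO ()
main = do
  print (all pumps [len | len <- [6 .. 200], inL len])  -- True
  print (inL (5 - 3))                                   -- False: a^5 pumped down gives a^2, not in L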
The given question is: "What is the value of f 572 for the following definition of f?"
f :: Int -> Int
f n = g n (n+1)
g :: Int -> Int -> Int
g m i
| (mod i m) == 0 = i
| otherwise = g m (i+1)
To me this looks like a recursive function, and the answer should be that i keeps counting up from 573 (that is, n+1) until 1144 (that's when mod 1144 572 will be 0).
It is a very inefficient way to calculate the double (2*) of a number, because f feeds g n (n+1).
g is given two numbers, and as long as (mod i m) == 0 fails (i is not divisible by m) it will increment i. From the moment the check succeeds, it returns i. Now the lowest number strictly greater than a positive k that is divisible by k is obviously 2*k.
So, for positive arguments, f is equivalent to:
-- equivalent to (for positive numbers)
f' = (2*)
and in particular f 572 = 2 * 572 = 1144.
If negative numbers are also considered, it will always return 0 for strictly negative input, since the first number from n+1 upwards to satisfy the modulo check is 0. Finally, if 0 is given, it will raise an error, because mod i 0 is undefined. So, taking zero and negative numbers into account, the full definition is:
-- equivalent (with negative numbers and zero)
f' n | n > 0 = 2*n
| n < 0 = 0
-- n == 0 should error
Since the algorithm increments i one step at a time, the program runs in time linear in n (given that increment and modulo can be evaluated in constant time), so it is O(n). The equivalent definition of course runs in constant time (given that multiplication and comparison can be done in constant time; this is not the case for Integer, for instance).
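For reference, a quick check of these claims (the definitions from the question plus a small main):
f :: Int -> Int
f n = g n (n + 1)

g :: Int -> Int -> Int
g m i
  | mod i m == 0 = i
  | otherwise    = g m (i + 1)

main :: IO ()
main = do
  print (f 572)   -- 1144, i.e. 2 * 572
  print (f 7)     -- 14
  print (f (-3))  -- 0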
I'm learning Haskell and I have been practising writing some functions by myself, among them a calculation of sine using recursion, but I get strange results.
The formula I'm using to calculate the sine is the Taylor series
sin x = Σ (k ≥ 0) (-1)^k * x^(2k+1) / (2k+1)!
And my code is this:
--Returns n to power p
pow :: Float->Integer->Float
pow n p =
if p == 0 then
1
else
if p == 1 then
n
else
n * (pow n (p-1))
--Finds a number's factorial
f :: Integer->Integer
f n =
if n == 1 then
n
else
n * (f (n-1))
--TODO: Trigonometric functions ( :v I'll do diz 2)
sinus :: Float->Char->Float
sinus n deg =
if(deg == 'd')then
sinusr 0 (normalize (torad n)) 0
else
sinusr 0 (normalize n) 0
--Get the value equivalent to radians of the presented degrees
torad :: Float->Float
torad v = ( (v * pi) / 180 )
--Recursive to get the value of the entering radians
sinusr :: Integer->Float->Float->Float
sinusr k x result =
if k == 130 then
result + ( ((pow (-1) k ) * ((pow x ((2*k)+1))) / (fromIntegral (f ((2*k)+1)))))
else
result + (sinusr (k+1) x ( ((pow (-1) k ) * ((pow x ((2*k)+1))) / (fromIntegral (f ((2*k)+1))))))
--Subtracts 2*pi the necessary times to get a value less than 2*pi :v
normalize :: Float->Float
normalize a = a - (fromIntegral (truncate (a / (pi*2)))*(pi*2))
For example, the output is this:
*Main> sinus 1 'd'
1.7452406e-2
*Main> sinus 1 's'
0.84147096
*Main> sinus 2 's'
NaN
*Main> sinus 2 'd'
3.4899496e-2
Can someone tell me why it is showing me that?
I have implemented the same logic in Lisp and it runs perfectly; I just had to figure out the Haskell syntax, but as you can see, it is not working as it should.
Thank you very much in advance.
Single-precision floating-point arithmetic isn't accurate enough to calculate a trigonometric function this way. The exponent doesn't have enough bits for the large intermediate numbers in sinusr. Or, to be blunt, the following number doesn't fit in a Float:
ghci> 2 ^ 130 :: Float
Infinity
As soon as you hit the boundaries of floating point numbers (-Infinity, Infinity) you usually end up with either those or NaN.
Use Double instead. Your Lisp implementation probably uses double-precision floating-point numbers too. Even better, don't recalculate the whole fraction from scratch in every step; instead, update the current term incrementally (each term is the previous one times -x^2 / ((2k+2)*(2k+3))), and the intermediate values won't grow too large even for Float.
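For instance, a sketch of that incremental variant (sinTaylor is my name, not a library function):
-- sin x via its Taylor series; the next term is the previous term
-- multiplied by -x^2 / ((2k+2) * (2k+3)), so no huge power or
-- factorial is ever formed
sinTaylor :: Double -> Double
sinTaylor x = go (0 :: Int) x 0
  where
    go k term acc
      | k >= 50   = acc
      | otherwise = go (k + 1)
                       (term * negate (x * x) / fromIntegral ((2 * k + 2) * (2 * k + 3)))
                       (acc + term)
In ghci, sinTaylor 2 gives approximately 0.9092974, matching sin 2.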
I started learning Haskell recently, and in my class we have constructed a Peano number type and made it an instance of the Num typeclass.
During lecture, my professor claimed that depending on whether you viewed the successor function as S x = x + 1 or S x = 1 + x, the appropriate successor case for the multiplication definition would be different. Respectively:
x * S y = x * y + x
x * S y = x + x * y
Moreover, he claimed that using the first of these two choices is preferable because it is lazier but I'm having trouble seeing how this is the case.
We looked at the example in which the addition definition of
x + S y = S (x + y)
is better than
x + S y = S x + y
because evaluating x + y == z occurs much faster, but I can't find an analogous case for multiplication.
The lecture notes are here: http://cmsc-16100.cs.uchicago.edu/2014/Lectures/lecture-02.php
Laziness is not about speed but about what is available how soon.
With x * S y = x * y + x, you can answer infinity * 2 > 5 very quickly, because it will expand like so:
infinity * (S (S Z)) > 5
infinity * (S Z) + infinity > 5
infinity * Z + infinity + infinity > 5
infinity + infinity > 5
(from there the rest is trivial)
However, I don't think it is all as good as your professor claimed! Try to expand out 2 * infinity > 5 in this formalism and you'll be disappointed (or busy for a very long time :-P). On the other hand, with the other definition of multiplication, you do get an answer there.
Now, if we have the "good" definition of addition, I think it should be the case that you can get an answer with infinities in either position. And indeed, I checked the source of a few Haskell packages that define Nats, and they prefer x * S y = x + x * y rather than the way your professor claimed was better.
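If you want to experiment with these expansions concretely, here is a minimal sketch (the names Nat, add, mul, gt and two are mine, not from the lecture notes), using the "good" addition together with your professor's multiplication:
data Nat = Z | S Nat

two :: Nat
two = S (S Z)

-- the "good" addition: x + S y = S (x + y)
add :: Nat -> Nat -> Nat
add x Z     = x
add x (S y) = S (add x y)

-- the professor's multiplication: x * S y = x * y + x
mul :: Nat -> Nat -> Nat
mul _ Z     = Z
mul x (S y) = add (mul x y) x

infinity :: Nat
infinity = S infinity

-- gt n b tests n > b, forcing only as many constructors of n as needed
gt :: Nat -> Int -> Bool
gt Z     _ = False
gt (S n) b = b <= 0 || gt n (b - 1)
With these definitions, gt (mul infinity two) 5 returns True almost immediately, exactly as in the expansion above; swapping in the other multiplication, mul x (S y) = add x (mul x y), lets you compare the two behaviours for yourself.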