For example, how to prove in Coq that:

[first equation was an image and is not recoverable]

or that:

\sum_{i=1}^{n} (x_i - avg) = 0, where avg = (\sum_{i=1}^{n} x_i) / n
You have many ways to state your lemmas and definitions; in particular, it depends on what your assumptions about the datatypes are. I recommend using the bigop library from the Mathematical Components Coq package. With it, you can prove your second lemma easily enough:
From mathcomp Require Import all_ssreflect all_algebra.
Section Avg.
Open Scope ring_scope.
Import GRing.Theory.
Variables (N : fieldType) (n : nat) (n_pos : n%:R != 0 :> N) (X : n.-tuple N).
Definition avg := (\sum_(x <- X) x) / n%:R.
Lemma avgP : \sum_(x <- X) (x - avg) = 0.
Proof.
rewrite sumrB !big_tuple sumr_const card_ord -mulr_natr divfK //.
by rewrite big_tuple subrr.
Qed.
End Avg.
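For reference, the informal computation that the script mechanizes is just the usual one; the cancellation in the last step is exactly where the hypothesis n%:R != 0 is used:

\[
\sum_{x \in X} (x - \mathrm{avg})
  = \sum_{x \in X} x - n \cdot \mathrm{avg}
  = \sum_{x \in X} x - n \cdot \frac{\sum_{x \in X} x}{n}
  = 0.
\]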
Note that the above code is just meant as an example for you to get a feeling for a simple proof; properly developing a theory of statistics would require way more thought, and likely a different encoding of avg.
I want to prove the goal below.
n_n: ℕ
n_ih: n_n * (n_n + 1) / 2 = arith_sum n_n
⊢ (n_n + 1) * (n_n + 1 + 1) / 2 = n_n + 1 + n_n * (n_n + 1) / 2
ring, simp, and linarith are not working.
I also tried calc, but it gets too long.
Is there an automatic command to cancel a common constant in an equation?
I would say that you are asking the wrong question. Your hypothesis and goal contain /, but this is not mathematical division: it is a pathological function which computer scientists use, which takes as input two natural numbers and is forced to return a natural number, so it often can't return the right answer. For example, 5 / 2 = 2 with the division you're using. Computer scientists call it "division with remainder" and I call it "broken and should never be used".

When I'm doing this sort of exercise with my class, I always coerce everything to the rationals before I do the division, so the division is mathematical division rather than this pathological function, which does not satisfy things like (a / b) * b = a. The fact that this division doesn't obey the rules of normal division is why you can't get the tactics to work. If you coerce everything to the rationals before doing the division, then you won't get into this mess and ring will work fine.
If you do want to persevere down the natural-division road, then one approach would be to start by proving that n(n+1) is always even, so that you can deduce (n(n+1)/2)*2 = n(n+1). Alternatively, you could avoid this by observing that to show A/2 = B/2 it suffices to prove that A = B (see the sketch below). But either way you'll have to do a few lines of manual fiddling, because you're not using mathematical functions, you're using computer science approximations, so mathematical tactics don't work with them.
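For the second route, here is a minimal Lean 3 sketch of the reduction (my own illustration, not from the original post):

-- once the numerators are known to be equal, rewriting closes the goal
example (A B : ℕ) (h : A = B) : A / 2 = B / 2 := by rw h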
Here's what my approach looks like:
import algebra.big_operators
open_locale big_operators
open finset
def arith_sum (n : ℕ) := ∑ i in range n, (i : ℚ) -- answer is rational
example (n : ℕ) : arith_sum n = n*(n-1)/2 :=
begin
unfold arith_sum,
induction n with d hd,
{ simp },
{ rw [finset.sum_range_succ, hd, nat.succ_eq_add_one],
push_cast,
ring, -- works now
}
end
Total Julia noob here (with basic knowledge of Python). I am trying to do linear regression and things I read suggest the GLM package. Here is some sample code I found here:
using DataFrames, GLM
y = 1:10
df = DataFrame(y = y, x1 = y.^2, x2 = y.^3)
sm = GLM.lm( @formula(y ~ x1 + x2), df )
coef(sm)
Can someone explain the syntax here? What does @formula mean? The docs here say @foo means a macro, which I guess is basically just a function, but where do I find the function/macro formula? Just looking at the use here, though, I would have thought it is maybe passing y ~ x1 + x2 (whatever that is) as the formula argument to lm (similar to keyword arguments with = in Python)?
Next, what is ~ here? The general docs say ~ means negation, but I'm not seeing how that makes sense here.
Is there a place in the GLM docs where all of this is explained? I'm not seeing one; I only see a few examples, but not a full breakdown of each function and all of its arguments.
You have stumbled upon the @formula language that is defined in the StatsModels.jl package and implemented in many statistics/econometrics-related packages across the Julia ecosystem.
As you say, @formula is a macro, which transforms the expression given to it (here y ~ x1 + x2) into some other Julia expression. If you want to find out what happens when a macro gets called in Julia - which I admit can often look like magic to new (and sometimes experienced!) users - the @macroexpand macro can help you. In this case:
julia> @macroexpand @formula(y ~ x1 + x2)
:(StatsModels.Term(:y) ~ StatsModels.Term(:x1) + StatsModels.Term(:x2))
The result above is the expression constructed by the @formula macro. We see that the variables in our formula macro are transformed into StatsModels.Term objects. If we were to use StatsModels directly, we could construct this ourselves by doing:
julia> Term(:y) ~ Term(:x1) + Term(:x2)
FormulaTerm
Response:
y(unknown)
Predictors:
x1(unknown)
x2(unknown)
julia> (Term(:y) ~ Term(:x1) + Term(:x2)) == @formula(y ~ x1 + x2)
true
Now what is going on with ~, which as you say can be used for negation in Julia? What has happened here is that StatsModels has defined methods for ~ (which in Julia is an infix operator, meaning essentially that it is a function that can be written between its arguments rather than having to be called with its arguments in brackets):
julia> (Term(:y) ~ Term(:x)) == ~(Term(:y), Term(:x))
true
So writing y::Term ~ x::Term is the same as calling ~(y::Term, x::Term), and this method for calling ~ with terms on the left and right hand side is defined by StatsModels (see method no. 6 below):
julia> methods(~)
# 6 methods for generic function "~":
[1] ~(x::BigInt) in Base.GMP at gmp.jl:542
[2] ~(::Missing) in Base at missing.jl:100
[3] ~(x::Bool) in Base at bool.jl:39
[4] ~(x::Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8}) in Base at int.jl:254
[5] ~(n::Integer) in Base at int.jl:138
[6] ~(lhs::Union{AbstractTerm, Tuple{Vararg{AbstractTerm,N}} where N}, rhs::Union{AbstractTerm, Tuple{Vararg{AbstractTerm,N}} where N}) in StatsModels at /home/nils/.julia/packages/StatsModels/pMxlJ/src/terms.jl:397
Note that you also find the general negation meaning here (method 3 above, which defines the behaviour for calling ~ on a boolean argument and is in Base Julia).
I agree that the GLM.jl docs maybe aren't the most comprehensive in the world, but one of the reasons for that is that the whole machinery behind @formula actually isn't a GLM.jl thing - so do check out the StatsModels docs linked above, which I think are quite good.
Concatenative languages have some very intriguing characteristics, such as being able to compose functions of different arity and being able to factor out any section of a function. However, many people dismiss them because of their use of postfix notation, which is tough to read. Plus, the Polish probably don't appreciate people using their carefully crafted notation backwards.
So, is it possible to have prefix notation? If it is, what would the tradeoffs be?
I have an idea of how it could work, but I'm not experienced with concatenative languages so I'm probably missing something. Basically, a function would be evaluated in reverse order and values would be pulled from the stack in reverse order. To demonstrate this, I'll compare postfix to what prefix would look like. Here are some concatenative expressions with the traditional postfix notation.
5 dup * ! Multiply 5 by itself
3 2 - ! Subtract 2 from 3
(1, 2, 3, 4, 5) [2 >] filter length ! Get the number of integers from 1 to 5
! that are greater than 2
The expressions are evaluated from left to right: in the first example, 5 is pushed on the stack, then dup duplicates the top value on the stack, then * multiplies the top two values on the stack. Functions pull their last argument first from the stack: in the second example, when - is called, 2 is at the top of the stack, but it is the last argument.
Here is what I think prefix notation would look like:
* dup 5
- 3 2
length filter (1, 2, 3, 4, 5) [< 2]
The expressions are evaluated from right to left, and functions pull their first argument first from the stack. Note how the prefix filter example reads much more like its description and looks similar to the applicative style. One issue I noticed is that factoring things out might not be as useful. For example, in postfix notation you can factor out 2 - from 3 2 - to create a subtractTwo function. In prefix notation you can factor out - 3 from - 3 2 to create a subtractFromThree function, which doesn't seem as useful.
Barring any glaring issues, perhaps a concatenative language that uses prefix notation could win over the people who dislike postfix notation. Any insight is appreciated.
Well certainly, if your words are still fixed-arity, then it's just a matter of executing tokens right to left.
It's only because of variable-arity functions that prefix notation implies parentheses, and it's only because of wanting human "reading order" to match execution order that being a stack language implies postfix.
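To make the first point concrete, here is a minimal Haskell sketch (my own illustration, not part of the original answer): with fixed-arity words, prefix evaluation is just a right-to-left fold of the tokens over a stack.

-- a token is either a literal or a named word
data Token = Lit Int | Word String

-- step interprets one token against the current stack
step :: Token -> [Int] -> [Int]
step (Lit n)      st           = n : st
step (Word "dup") (a : st)     = a : a : st
step (Word "*")   (a : b : st) = a * b : st
step (Word "-")   (a : b : st) = a - b : st  -- first argument is popped first
step _            st           = st

-- folding from the right makes tokens execute right to left
run :: [Token] -> [Int]
run = foldr step []

-- "- 3 2" leaves [1] on the stack, i.e. 3 - 2
example :: [Int]
example = run [Word "-", Lit 3, Lit 2]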
I'm writing such a language right now as it happens, and so far I like some of the side-effects of using prefix notation. The semantics are based on Joy:
Files are parsed from left to right, but executed from right to left.
By extension, definitions must come after the point at which they are used.
As a nice side-effect, comments are simply lists which are dropped.
Here's the factorial function, for instance:
def 'fact [cond [* fact - 1 dup] [1 drop] dup]
I also find it easier to reason about the code as I'm writing it, but I don't have a strong background in concatenative languages. Here's my (probably-naive) derivation of the map function over lists. The 'nb' function drops something and is used for comments. 'stash [f]' pops into a temp, runs 'f' on the rest of the stack, then pushes the temp back on.
def 'map [q [cons map stash [head swap i] dup stash [tail dup]] [nb] is_cons nip]
nb [map [f] (cons x y) -> cons map [f] x f y
stash [tail dup] [f] (cons x y) = [f] y (cons x y)
dup [f] y (cons x y) = [f] [f] y (cons x y)
stash [head swap i] [f] [f] y (cons x y) = [f] x (f y)
cons map [f] x (f y) = cons map [f] x f y
map [f] [] -> []]
I just came from reading about the Om Language.
It seems to be just what you are talking about. From its description (emphasis mine):
The Om language is:
a novel, maximally-simple concatenative, homoiconic programming and algorithm notation language with:
minimal syntax, comprised of only three elements.
prefix notation, in which functions manipulate the remainder of the program itself. [...]
It also states that it's not finished and will still change a lot.
Still, it seems to be working, and it is really interesting as a proof of concept.
I imagine a concatenative prefix language without a stack. It could call functions, which would then themselves interpret code until they got all the operands they need. The interpreter would then call the next function. It would only need one memory construct - the result. Everything else could be read from the source code at the time of execution. As you might have noticed, I am talking about an interpreted language, not a compiled one.
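Here is a minimal Haskell sketch of that stackless idea (my own illustration, assuming binary operators only): each operator recursively consumes its operands from the remaining source, so the only "memory" is the result being passed back up.

-- a token is a number or a binary operator
data Tok = Num Int | Op (Int -> Int -> Int)

-- evalPrefix reads the program in order; an operator token
-- recursively evaluates the rest of the source to get its operands
evalPrefix :: [Tok] -> (Int, [Tok])
evalPrefix (Num n : rest) = (n, rest)
evalPrefix (Op f : rest) =
  let (a, r1) = evalPrefix rest  -- first operand
      (b, r2) = evalPrefix r1    -- second operand
  in (f a b, r2)
evalPrefix [] = error "ran out of tokens"

-- "- 3 2" evaluates to 1
example :: Int
example = fst (evalPrefix [Op (-), Num 3, Num 2])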
I'm trying to create a modulus function in Haskell using primitive recursive functions. I know it's possible (because it's on the list of example functions on Wikipedia).
And I know how I'd logically do it too... but I just can't implement it!
I.e., the logic is (not primitive recursion or Haskell):
function mod(a, b){
while(a >= b)
a -= b
return a;
}
Which I can define using recursion (again, not Haskell):
function mod(a, b){
if(a < b) return a;
return mod(a - b, b);
}
But I just can't seem to implement it using primitive recursive functions. The bit which I can't do is the logic of a < b.
I think to really solve my problem I need some sort of defined logic, such as (again, not Haskell):
reduce(a, b)
= a >= b -> a - b
otherwise a
If anyone could help me with any part of this, I'd really appreciate it. Thanks!
Edit:
I thought of potentially defining a modulus function making use of division, i.e. mod(a, b) = a - (a/b) * b, but since my primitive recursive function for division relies on modulo, I can't do it, haha.
Take a look at this for some pointers: http://www.proofwiki.org/wiki/Quotient_and_Remainder_are_Primitive_Recursive
Also note that the Wikipedia definition is somewhat narrow. Any function built up by induction over a single finite data structure is primitive recursive, though it takes a bit of work to show that this translates into the tools given on Wikipedia. And note that we can represent the naturals in the classic Peano style. You don't have to do this, of course, but it makes reasoning about induction much more natural. See the Agda wiki for a citation of this notion of primitive recursion: http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.Totality#Primitiverecursion
The following page also has what I think is a somewhat clearer exposition of primitive recursion: http://plato.stanford.edu/entries/recursive-functions/#1.3
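To illustrate the Peano-style view, here is a small Haskell sketch of my own (not from the linked pages): primitive recursion over such naturals is exactly the structural fold of the datatype.

-- the naturals as an inductively defined data structure
data Nat = Z | S Nat

-- the primitive recursion combinator for Nat: a base case z and a
-- step s that sees the predecessor and the recursive result
rec :: a -> (Nat -> a -> a) -> Nat -> a
rec z _ Z     = z
rec z s (S n) = s n (rec z s n)

-- example: addition by primitive recursion on the first argument
add :: Nat -> Nat -> Nat
add m n = rec n (\_ acc -> S acc) m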
The solution to this is:
mod(0, y) = zero(y)
mod(x, 0) = zero(x)
mod(x + 1, y) = mult3(succ(mod(x, y)), sign(y), notsign(eq(mod(x, y), diff(y, 1))))
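Here is a quick Haskell transcription of that schema (a sketch on my part; the encodings of the helpers - diff as truncated subtraction, sign, notsign, eq - are the standard primitive recursive ones and are my assumptions, not part of the answer above):

-- truncated subtraction: a - b, but never below zero
diff :: Integer -> Integer -> Integer
diff a b = max (a - b) 0

-- sign n is 0 when n == 0 and 1 otherwise
sign :: Integer -> Integer
sign n = if n == 0 then 0 else 1

notsign :: Integer -> Integer
notsign n = 1 - sign n

-- eq a b is 1 when a == b and 0 otherwise, built from diff
eq :: Integer -> Integer -> Integer
eq a b = notsign (diff a b + diff b a)

mult3 :: Integer -> Integer -> Integer -> Integer
mult3 a b c = a * b * c

-- the three equations above, written out directly
mod' :: Integer -> Integer -> Integer
mod' 0 _ = 0
mod' _ 0 = 0
mod' x y = mult3 (m + 1) (sign y) (notsign (eq m (diff y 1)))
  where m = mod' (x - 1) y

-- e.g. mod' 7 3 == 1 and mod' 6 3 == 0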
Greetings, StackOverflow.
Let's say I have the following two recurrence relations for computing S(i, j).
I would like to compute the values S(0,0), S(0,1), S(1,0), S(2,0), etc. in an asymptotically optimal way. A few minutes with pencil and paper reveal that the computation unfolds into a treelike structure which can be traversed in several ways. Now, it's unlikely the tree will be useful later on, so for now I'm looking to produce a nested list like [[S(00)],[S(10),S(01)],[S(20),S(21),S(12),S(02)],...]. I have created a function to produce a flat list of S(i,0) (or S(0,j), depending on the first argument):
osrr xpa p predexp = xs
  where
    -- self-referential list: each element is built from the two before it
    xs = os00 : os00 * (xpa + rp) : zipWith3 osrr' [1..] (tail xs) xs
    osrr' n a b = xpa * a + rp * n * b
    os00 = sqrt (pi/p) * predexp
    rp = recip (2*p)
I am, however, at a loss as to how to proceed further.
I would suggest writing it in a direct recursive style and using memoization to create your traversal:
import qualified Data.MemoCombinators as Memo
osrr p = memoed
where
memoed = Memo.memo2 Memo.integral Memo.integral osrr'
osrr' a b = ... -- recursive calls to memoed (not osrr or osrr')
The library will create an infinite table to store values you have already computed. Because the memo constructors are under the p parameter, the table exists for the scope of p; i.e. osrr 1 2 3 will create a table for the purpose of computing S(2,3) (with p = 1), and then clean it up. You can reuse the table for a specific p by partially applying:
osrr1 = osrr p
Now osrr1 will share the table between all its calls (which, depending on your situation, may or may not be what you want).
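As a concrete illustration, here is what the filled-in version might look like. The real recurrences for S(i, j) were given as images in the question and are not recoverable, so the base cases and recurrence below are purely hypothetical placeholders:

import qualified Data.MemoCombinators as Memo

s :: Integer -> Integer -> Integer
s = Memo.memo2 Memo.integral Memo.integral s'
  where
    -- hypothetical base cases and recurrence, for illustration only
    s' 0 _ = 1
    s' _ 0 = 1
    s' i j = s (i - 1) j + s i (j - 1)  -- recursive calls go through the memoized s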
First, there must be some boundary conditions that you've not told us about.
Once you have those, try stating the solution as a recursively defined array. This works as long as you know an upper bound on i and j. Otherwise, use memo combinators.
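If you do know upper bounds on i and j, the recursively defined array version is short with Data.Array; this sketch uses the same hypothetical recurrence as above, with laziness tying the knot:

import Data.Array

table :: Int -> Array (Int, Int) Integer
table n = arr
  where
    arr = listArray ((0, 0), (n, n))
            [ go i j | i <- [0 .. n], j <- [0 .. n] ]
    go 0 _ = 1
    go _ 0 = 1
    go i j = arr ! (i - 1, j) + arr ! (i, j - 1)  -- lookups hit the lazy array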