Disclaimer: this is for a homework assignment
I'm a Coq noob, so I hope this is not a repeat question. I /have/ looked at this question, but mine still seems to be unanswered.
I have the following premises:
P \/ Q
~Q
I need to prove:
P
My Coq code so far:
Section Q5.
Variables Q : Prop.
Goal P.
Hypothesis premise1 : P \/ Q.
Hypothesis premise2 : ~Q.
I get the following error when I try to execute the line Goal P.:
Error: The reference P was not found in the current environment.
These are the solutions that I was able to come up with:
1. Replace Variables Q : Prop. with Variables P Q : Prop.. The problem with this is that P will be assumed as a premise, which it is not.
2. Add Variables P. before the goal declaration. This results in a syntax error.
Am I missing something? I don't seem to be able to figure this out.
The proper solution is the first one, and the problem you are expecting does not actually arise.
When you write:
Variable P: Prop.
You are not assuming that P is inhabited (or, that "P holds"), but only that there exists a proposition named P, a "statement" whose validity is not considered here.
This is very different from writing:
Variable p: P.
Which assumes that there is a proof "p" that the type "P" is inhabited (if P has type Prop, p is a proof of the proposition P), and thus assumes that P is true.
Also, the reason why:
Variables P.
results in a syntax error is that you need to provide a type for each variable you introduce (Coq can't figure it out magically when there is no information to guide the type inference engine).
So it is perfectly fine to begin your script as:
Section Q5.
Variables P Q : Prop.
Hypothesis premise1 : P \/ Q.
Hypothesis premise2 : ~Q.
Goal P.
So this is related to regular languages, their quotients, and closure properties. I am supposed to first answer whether the statement is true or false and then explain why. I don't know how to answer the following question.
1.)
Which of the following identities are true?
(L/a)a = L (the left side represents the concatenation of the languages L/a and {a}).
If they are not equal, this is most easily shown by a counterexample. Equality is usually shown by proving the inclusion in both directions.
Here, take for example L = {b}. No word of L ends in a, so L/a is the empty language, and thus (L/a)a is also empty, which is not equal to L.
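For reference, here is a short worked version of that computation, assuming the standard right-quotient definition (if your course defines L/a differently, the same style of argument still applies):

\[
L/a = \{\, w \mid wa \in L \,\}, \qquad (L/a)\,a = \{\, wa \mid wa \in L \,\} \subseteq L
\]
\[
L = \{b\} \;\Rightarrow\; L/a = \emptyset \;\Rightarrow\; (L/a)\,a = \emptyset \neq \{b\} = L
\]

So one inclusion, (L/a)a ⊆ L, always holds, but the reverse inclusion can fail.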
Are these two equivalent:
r: A -> B
r: A set -> set B
That is, is set the default multiplicity?
If yes, then I will quibble with the definition of the arrow operator in the Software Abstractions book. The book says on page 55:
The arrow product (or just product) p->q of two relations p and q is
the relation you get by taking every combination of a tuple from p and
a tuple from q and concatenating them.
I interpret that definition to mean that the only valid instance for p->q is one that has every possible combination of tuples from p with tuples from q. But that's not right (I think). Any instance containing mappings between p and q is valid. For example, on page 56 there is this example:
Name = {(N0), (N1)}
Addr = {(D0), (D1)}
The book says this is a valid relation for Name->Addr
{(N0, D0), (N0, D1), (N1, D0), (N1, D1)}
But that's not the only valid relation, right? For example, this is a valid relation:
{(N0, D0), (N1, D1)}
Is that right?
The declaration r : A->B means r is a subset of A->B. The expression A->B has just one value, which is the cross product of A and B. The declaration results in a set of possible values for r, which would include both the example given in the book that you cite, and the example that you ask about.
From wiki.haskell.org:
First of all, common subexpression elimination (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:
foo x = (bar x) * (bar x)
might be transformed into
foo x = let x' = bar x in x' * x'
thus, the bar function is only called once. (And if bar is a particularly expensive function, this might save quite a lot of work.)
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program. So GHC does do CSE, but only in specific circumstances --- see the GHC manual. (Section??)
Long story short: "If you care about CSE, do it by hand."
I'm wondering under what circumstances CSE "affects" the strictness/laziness of the program and what kind of effect that could be.
The naive CSE rule would be
e'[e, e] ~> let x = e in e'[x, x].
That is, whenever a subexpression e occurs twice in the expression e', we use a let-binding to compute e once. This, however, by itself leads to some serious space leaks. For example
sum [1..n] + prod [1..n]
typically runs in O(1) space in a lazy functional programming language like Haskell (sum and prod each consume their list incrementally, so the list cells can be garbage-collected as they are produced), but it becomes O(n) once the naive CSE rule is applied, because the shared list has to be kept alive in full. This can be terrible for programs when n is large!
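A minimal Haskell sketch of that effect (my own example; I use the Prelude's product rather than a prod function):

-- Two independent lists: each is consumed and garbage-collected as it is
-- produced, so the whole expression runs in constant space.
noCse :: Integer -> Integer
noCse n = sum [1 .. n] + product [1 .. n]

-- What the naive CSE rule would produce: the shared list has to stay alive
-- in full while sum traverses it, because product still needs it afterwards,
-- so space usage grows to O(n).
naiveCse :: Integer -> Integer
naiveCse n = let xs = [1 .. n] in sum xs + product xs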
The approach is then to make this rule more specific, restricting it to a small set of cases that we know won't have the problem. We can begin by enumerating the problems with the naive rule more specifically; these will form a set of priorities for developing a better CSE:
The two occurrences of e might be far apart in e', leading to a long lifetime for the let x = e binding.
The let-binding must always allocate a closure where previously there might not have been one.
This can create an unbounded number of closures.
There are cases where the closure might never be deallocated.
Something better
let x = e in e'[e] ~> let x = e in e'[x]
This is a more conservative rule but is much safer. Here we recognize that e appears twice but the first occurrence syntactically dominates the second expression, meaning here that the programmer has already introduced a let-binding. We can safely just reuse that let-binding and replace the second occurrence of e with x. No new closures are allocated.
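In Haskell-source terms, a toy example of that rule (my own names, not from the answer):

bar :: Int -> Int
bar = (* 2)  -- stand-in for an expensive function

-- Before: bar x is recomputed even though a binding for it already exists.
beforeLet :: Int -> Int
beforeLet x = let y = bar x in y + bar x

-- After: the existing binding syntactically dominates the second occurrence,
-- so we simply reuse it; no new closure is introduced.
afterLet :: Int -> Int
afterLet x = let y = bar x in y + y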
Another example of syntactic domination:
case e of { x -> e'[e] } ~> case e of { x -> e'[x] }
And yet another:
case e of {
Constructor x0 x1 ... xn ->
e'[e]
}
~>
case e of {
Constructor x0 x1 ... xn ->
e'[Constructor x0 x1 ... xn]
}
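Again in Haskell-source terms (a toy example of my own): inside the alternative we already know what the scrutinee evaluated to, so we can rebuild it from the pattern variables instead of re-evaluating the original expression.

expensive :: Int -> (Int, Int)
expensive m = (m * m, m + m)  -- stand-in for something costly

-- Before: expensive n appears again inside the branch that already
-- scrutinised it, so the work could be repeated.
beforeCase :: Int -> Int
beforeCase n = case expensive n of
  (a, b) -> a + b + fst (expensive n)

-- After: replace the second occurrence with the constructor rebuilt from
-- the pattern variables (here just the pair (a, b)).
afterCase :: Int -> Int
afterCase n = case expensive n of
  (a, b) -> a + b + fst (a, b)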
These rules all take advantage of existing structure in the program to ensure that the space behaviour remains the same before and after the transformation. They are much more conservative than the original CSE rule, but they are also much safer.
See also
For a full discussion of CSE in a lazy FPL, read Chitil's (very accessible) 1997 paper. For a full treatment of how CSE works in a production compiler, see GHC's CSE.hs module, which is documented very thoroughly thanks to GHC's tradition of writing long Notes in comments. The comment-to-code ratio in that module is off the charts. Also note how old that file is (1993)!
I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable, whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from [1]:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
<1> -> 1;
<2> -> 2) Pack{1,0} ;
main = f 3
== transformed into ==>
t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
<1> -> 1;
<2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm; that is, the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was to just implement compilation of ECase expressions in the C scheme. And I did it by mimicking the rule for compilation in the E scheme (page 134 in [1], but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF), and the C scheme doesn't guarantee that, unlike the E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong, and I can see that from the outputs. But I'm having trouble stating a formal argument as to why that approach was bound to fail.
Can someone provide me with such argument/proof or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but just to delay everything until an EVAL happens (which it might or might not). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
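To see why that premature EVAL can change the meaning of a program, here is a small Haskell analogy (my own example, not from the book): an expression compiled in a non-strict context may never be demanded at all, so forcing it at graph-construction time is not semantics-preserving.

-- The case expression sits in a non-strict position: const discards it.
-- Under lazy evaluation this prints 42; if the scrutinee were forced at the
-- point where the graph is built, the program would loop forever instead.
main :: IO ()
main = print (const (42 :: Int) scrutinee)
  where
    scrutinee = case loop of
                  () -> 0 :: Int
    loop = loop  -- a non-terminating scrutinee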
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)
I am implementing an impure untyped lambda-calculus interpreter in Haskell.
I'm presently stuck on implementing "alpha-congruence" (also called "alpha-equivalence" or "alpha-equality" in some textbooks). I want to be able to check whether two lambda-expressions are equal or not equal to each other. For example, if I enter the following expression into the interpreter it should yield True (\ is used to indicate the lambda symbol):
>\x.x == \y.y
True
The problem is understanding whether the following lambda-expressions are considered alpha-equivalent or not:
>\x.xy == \y.yx
???
>\x.yxy == \z.wzw
???
In the case of \x.xy == \y.yx I would guess that the answer is True. This is because \x.xy => \z.zy and \y.yx => \z.zy and the right-hand sides of both are equal (where the symbol => is used to denote alpha-reduction).
In the case of \x.yxy == \z.wzw I would likewise guess that the answer is True. This is because \x.yxy => \a.yay and \z.wzw => \a.waw, which (I think) are equal.
The trouble is that all of my textbooks' definitions state that only the names of the bound variables need to be changed for two lambda-expressions to be considered equal. They say nothing about the free variables in an expression needing to be renamed uniformly as well. So even though y and w are both in their correct places in the lambda-expressions, how would the program "know" that the first y represents the first w and the second y represents the second w? I would need to be consistent about this in an implementation.
In short, how would I go about implementing an error-free version of a function isAlphaCongruent? What are the exact rules that I need to follow in order for this to work?
I would prefer to do this without using de Bruijn indices.
You are misunderstanding: different free variables are not alpha equivalent. So y /= x, and \w.wy /= \w.wx, and \x.xy /= \y.yx. Similarly, \x.yxy /= \z.wzw because y /= w.
Your book says nothing about free variables being allowed to be uniformly renamed because they are not allowed to be uniformly renamed.
(Think of it this way: if I haven't yet told you the definition of not and id, would you expect \x. not x and \x. id x to be equivalent? I sure hope not!)
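A minimal sketch of how that rule translates into code, without de Bruijn indices (my own syntax tree and names, not taken from your interpreter): carry a list of paired bound variables while descending, and require free variables to match exactly.

data Expr = Var String | App Expr Expr | Lam String Expr

-- isAlphaCongruent (Lam "x" (Var "x")) (Lam "y" (Var "y"))   == True
-- isAlphaCongruent (Lam "x" (App (Var "x") (Var "y")))
--                  (Lam "y" (App (Var "y") (Var "x")))       == False
isAlphaCongruent :: Expr -> Expr -> Bool
isAlphaCongruent = go []
  where
    go env (Lam x e) (Lam y f) = go ((x, y) : env) e f
    go env (App f a) (App g b) = go env f g && go env a b
    go env (Var x) (Var y) =
      case (lookup x env, lookupRight y env) of
        (Just y', Just x') -> y' == y && x' == x  -- both bound: their innermost bindings must pair them with each other
        (Nothing, Nothing) -> x == y              -- both free: must be literally the same name
        _                  -> False               -- one bound, one free
    go _ _ _ = False

    -- look a variable up by the right-hand component of the pairs
    lookupRight y = lookup y . map (\(a, b) -> (b, a))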