How do I express that a proposition is independent in TLA+?

Let's say the following is incorrect
THEOREM Spec => []P \* Not correct!
That is, there is some behavior satisfying Spec that does not exhibit []P. How can I express this in TLA+?
If I use simple negation, I end up with
THEOREM Spec /\ <>~P \* Also not correct!
However, it's possible that this theorem is also not correct! In particular, even though there is some behavior satisfying Spec that does not exhibit []P, there is also possibly some behavior that does, and that behavior would disprove this new theorem.
Is there some way of expressing the idea "this theorem is incorrect for some behaviors," i.e. quantification over behaviors?
EDIT: After thinking a bit about the precise nature of what I'm asking for, what I'm really asking is: is there a way of annotating that []P is independent of Spec?

We are in classical logic, so []P \/ ~[]P is valid. You restated the second disjunct (~[]P is equivalent to <>~P), so we have []P \/ <>~P. For any P, one of the two will be true.

The usual way to do this is to write THEOREM ~(Spec => []P), make []P a temporal property, and tell people that the spec is correct if the property fails.

Provided that the properties are parametrized as operators that take arguments, yes, it is possible to express that there exists a behavior that satisfies Spec(x) but violates []P(x). Using existential temporal quantification (\EE), this is expressible as follows:
Spec(x) == ... (* a suitable definition here *)
P(x) == ... (* a suitable definition here *)
THEOREM \EE x: Spec(x) /\ ~ []P(x)
where we assume that no variables appear in the definitions of the operators Spec(x) and P(x).
We could also express that there exist both a behavior that satisfies Spec(x) /\ []P(x) and a behavior that satisfies Spec(x) /\ ~ []P(x), as follows:
THEOREM /\ \EE x: Spec(x) /\ []P(x)
        /\ \EE x: Spec(x) /\ ~ []P(x)

Related

Alloy assertion on implies command

I am trying to implement in Alloy the axiomatic system described in a paper on mereology: "Bennett, Having a Part Twice Over, 2013".
I implemented all the axioms, and I thought that if I implemented them correctly, I could assert and check the theorems.
I tried to code theorem (T9). This is the theorem in the paper:
And this is how I coded it:
/* (T9) Conditional Reflexivity */
assert conditional_reflexivity {
all x: Filler | some z: Slot | z in x.slots implies x in x.parts
}
(Ps(z, x) means that z is a slot of x, and P(x, x) that x is a part of x. I coded both slots and parts as sets.)
However, when I check the assertion, something doesn’t seem to work. I got the following counterexample:
But I do not understand how this is a counterexample to an implication: the premise is not even fulfilled. The only way it makes sense is if the assertion requires such a z to exist for every x; in that case this is certainly a counterexample. But then how can I check the theorem?
(I can share the full code if needed.)
As explained by Hovercouch, it was a precedence issue:
you got AE(p impl q) when you wanted A((Ep) impl q)
Adding parentheses fixed the issue, as shown below.
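For reference, this is what the parenthesized version of the same assertion would look like, with the existential scoped to the premise (a sketch of the fix described above, not code from the original post):
/* (T9) Conditional Reflexivity */
assert conditional_reflexivity {
all x: Filler | (some z: Slot | z in x.slots) implies x in x.parts
}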

Defining a predicate without specifying its truth condition in Coq

I'm trying to use Coq for some simple kinds of philosophical predicate logic. Suppose, for instance, that I wanted to express the statement "if a being is human, it is not perfect" in Coq. I will first have to define what the terms 'being', 'human', and 'perfect' are. The natural approach, it seems, is to define the first as a type, and the others as unary predicates of that type.
Inductive being : Type :=
| b : nat -> being.
Inductive human : being -> Prop :=
| h : forall ( b : being ), human b.
Inductive perfect : being -> Prop :=
| p : forall ( b : being ), perfect b.
(being is indexed with numbers to ensure there are multiple distinct beings.)
With this definition, the statement can be expressed as
Lemma humans_are_imperfect :
forall ( b : being ), human b -> ~ perfect b.
Admitted.
The problem with this approach is that it allows proofs of nonsense, such as
Lemma humans_are_perfect :
forall ( b : being ), human b -> perfect b.
intros b H. apply ( p b ). Qed.
Obviously, there is a problem with the definitions of human and perfect, because they indiscriminately take any being and assert its humanity or perfection. What is needed is a definition of human and perfect that only specifies their types, while remaining silent about which beings they apply to.
This problem could be avoided altogether if the truth condition of perfect were built into its definition, e.g. if the constructor only took nonhuman beings as arguments. But supplying all the information upfront like that is not always feasible in philosophical arguments. Very often you have to take some type or predicate as a given, and slowly build up its details with further premises.
I have a feeling that what I'm trying to do does not fit very well into the inductive basis of Coq. Maybe I'm better off learning a logic programming language like Prolog, but hopefully I'm wrong.
What you want to do can be perfectly expressed in Coq using just higher-order predicate logic, without the need for inductive definitions. You can work with your theorems just by postulating the existence of a type being, predicates human and perfect, and inference rules relating those.
Section SomePhilosophy.
Variable being : Type.
Variable human perfect : being -> Prop.
Hypothesis human_not_perfect : forall b, human b -> ~ perfect b.
(* ... some theorems about the above notions ... *)
End SomePhilosophy.
After closing the section, all of your definitions become parametric with respect to choices of being, human and perfect that satisfy your hypothesis. In your original approach, you constrained your theory by fixing upfront what being is, since you gave its definition. Here, being can be anything.

What is the difference between "Qed" and "Defined"?

In the interactive theorem prover Coq, any interactive proof or definition can be terminated with either Qed or Defined. There is some concept of "opacity" which Qed enforces but Defined does not. For instance, the book Certified Programming with Dependent Types, by Adam Chlipala, states:
We end the "proof" with Defined instead of Qed, so that the definition we constructed remains visible. This contrasts to the case of ending a proof with Qed, where the details of the proof are hidden afterward. (More formally, Defined marks an identifier as transparent, allowing it to be unfolded; while Qed marks an identifier as opaque, preventing unfolding.)
However, I'm not quite sure what this means in practice. There is a later example in which it is necessary to use Defined due to the need for Fix to inspect the structure of a certain proof, but I don't understand exactly what this "inspection" entails, or why it would fail if Qed were used instead. (Looking at the definition of Fix wasn't exactly enlightening either).
Superficially, it's hard to tell what Qed is actually doing. For instance, if I write:
Definition x : bool.
exact false.
Qed.
I can still see the value of x by executing the command Print x. In addition, I'm allowed later to pattern-match on the "opaque" value of x:
Definition not_x : bool :=
match x with
| true => false
| false => true
end.
Therefore it seems like I'm able to use the value of x just fine. What does Prof. Chlipala mean by "unfolding" here? What exactly is the difference between an opaque and a transparent definition? Most importantly, what is special about Fix that makes this matter?
You are not really able to use the value of x, but only its type. For example, since x is false, try to prove that x = false or that x = true, and you won't be able to. You can unfold the definition of not_x (its definition is the same as that of x, but using Defined), but you won't be able to inspect the value of x, you only know that it is a boolean.
Lemma not_x_is_true : not_x = true.
Proof.
unfold not_x. (* this one is fine *)
unfold x. (* This one is not. Error: Cannot coerce x to an evaluable reference. *)
The idea behind Qed vs Defined is that in some cases you don't want to look at the content of a proof term (because it is not relevant, or it is just a really huge term you don't want to unfold), and all you need to know is that the statement is true, not why it is true. In the end, the question you have to ask before using Qed or Defined is: do I need to know why a theorem is true, or do I only need to know that it is true?

What are inductive predicates?

How would you explain inductive predicates? What are they used for? What's the theory behind them? Are they only present in dependent type systems, or in other systems as well? Are they related to GADTs in some way? Why are they true by default in Coq?
This is an example from Coq:
Inductive even : nat -> Prop :=
| even0 : even 0
| evens : forall p:nat, even p -> even (S (S p)).
How would you use this definition? Is it a datatype or a proposition?
I think we call inductive predicates objects that are defined inductively and sorted in Prop.
They are used for defining properties inductively (duh). The theory behind them can be found in the literature on inductive constructions; search for papers about CIC (the Calculus of Inductive Constructions), for instance.
They are somewhat related to GADTs, though with dependent types they can express more things. If I am not mistaken, with GADTs each constructor lives in one particular family, whereas the addition of dependent types allows constructors to inhabit different families based on their arguments.
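To make the GADT comparison concrete, here is a rough Haskell sketch (mine, not from the answer; the names are illustrative) of the same even predicate as a GADT indexed by a promoted natural-number type:
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- A value of type 'Even n' is evidence that n is even, mirroring the
-- Coq definition: Even0 covers zero, EvenS adds two.
data Even :: Nat -> Type where
  Even0 :: Even 'Z
  EvenS :: Even n -> Even ('S ('S n))

-- Evidence that four is even:
evenFour :: Even ('S ('S ('S ('S 'Z))))
evenFour = EvenS (EvenS Even0)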
I do not know what you mean by "true by default in Coq".
even as you define it is an inductive datatype. It is not a proposition yet, as the type nat -> Prop indicates. A proposition would be even 0 or even 1. It can be inhabited (provable) as in even 0, or not as in even 1.
To answer another part of the question. Inductive predicates/definitions are not only present in dependent type systems (e.g., Isabelle/HOL also has them). In principle they are much older and just describe the least predicate (or set) that is closed under some given inference rules. Whether this is well-defined or not depends on the rules. There are sufficient conditions which are easy to check (e.g., that the defined predicate itself only occurs positively in premises of rules). Then the Knaster-Tarski theorem yields the desired result.
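As a small, illustrative Haskell sketch (mine, not from the answer) of that "least predicate closed under the given rules" reading, one can iterate the rules for even to a fixed point over a bounded domain:
import qualified Data.Set as Set

-- The rules for "even" as a step function on sets of naturals:
-- 0 is even, and if p is even then p + 2 is even.
step :: Set.Set Int -> Set.Set Int
step s = Set.insert 0 (Set.map (+ 2) s)

-- Iterate from the empty set until nothing changes; the bound keeps the
-- sketch finite, standing in for the least fixed point.
evensUpTo :: Int -> Set.Set Int
evensUpTo bound = go Set.empty
  where
    go s =
      let s' = Set.filter (<= bound) (step s)
      in if s' == s then s else go s'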
If I remember correctly the book The Formal Semantics of Programming Languages: An Introduction by Glynn Winskel gives an introduction to the mathematical foundations.

How is (==) defined in Haskell?

I'm writing a small functional programming language in Haskell, but I can't find a definition of how (==) is implemented, as this seems to be quite tricky.
Haskell uses the concept of a "typeclass". The actual definition is something like this:
class Eq a where
  (==) :: a -> a -> Bool
  -- The class also declares (/=); each has a default definition in terms
  -- of the other, so an instance only needs to supply one of them.
Then you can define it for your own types. For example:
-- Eq can't be automatically derived, because of the function
data Foo = Foo Int (Char -> Bool)
-- So define it here
instance Eq Foo where
  (Foo x _) == (Foo y _) = x == y
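As a quick usage sketch (a separate test module is assumed, with the Foo definitions above in scope; isUpper and isLower are just example Char -> Bool functions from Data.Char), the function component plays no role in the comparison:
import Data.Char (isUpper, isLower)

main :: IO ()
main = do
  print (Foo 1 isUpper == Foo 1 isLower)  -- True: only the Ints are compared
  print (Foo 1 isUpper == Foo 2 isUpper)  -- False: the Ints differ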
I think your question is very interesting. If you also want to know the theoretical roots behind it, then I think we can abstract away from Haskell and investigate the question in more general terms of algorithms. As for Haskell, I think the following two facts matter:
Functions are first-class citizens in Haskell
Haskell is Turing-complete
but I have not yet worked out exactly how the strength of the language matters here.
Possibility for specific cases, but a no-go theorem for a comprehensive solution
At the root, I think two theorems of computer science provide the answer. If we want to abstract away from technical details, we can investigate the question in lambda calculus (or in combinatory logic): can equality be defined there? So let us restrict ourselves first to lambda calculus and combinatory logic.
It must be noted that both of these approaches are very minimalistic. There are no "predefined" datatypes in them, not even numbers, booleans, or lists. But you can mimic all of them in clever ways.
Instead of booleans, you can use projection (selector) functions (Church booleans).
Instead of C unions (or C++ class inheritance), you can use continuations. More precisely, it is case analysis that you can implement in a concise and straightforward way.
You can mimic natural numbers with functions that iterate function composition (Church numerals).
You can implement lists and trees with sophisticated algebraic methods (catamorphisms).
Thus, you can mimic all the usual datatypes even in minimalistic "functional languages" like lambda calculus and combinatory logic, by using lambda functions (or combinators) in clever ways to encode the datatype you want.
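As a hedged Haskell sketch of the encodings just listed (my illustration, not part of the original answer; a monomorphic carrier type keeps the signatures simple):
-- Church booleans: a boolean is a two-way selector.
type CBool a = a -> a -> a

ctrue, cfalse :: CBool a
ctrue  x _ = x
cfalse _ y = y

-- Case analysis as a continuation: apply the selector to the two branches.
caseOf :: CBool a -> a -> a -> a
caseOf b ifTrue ifFalse = b ifTrue ifFalse

-- Church numerals: the number n is "iterate a function n times".
type CNat a = (a -> a) -> a -> a

czero :: CNat a
czero _ z = z

csuc :: CNat a -> CNat a
csuc n s z = s (n s z)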
Now let us try to answer your question in these minimalistic functional languages first, to see whether the answer is Haskell-specific or rather a consequence of more general theorems.
Böhm's theorem provides that, for any two given distinct expressions (that halt, i.e. have normal forms), a suitable testing function can always be written that correctly decides whether the two expressions are semantically the same (Csörnyei 2007: 132, Th 7.2.2). In most practical cases (lists, trees, booleans, numbers), this means that a suitable specific equality function can always be written. See an example for lists in Tromp 1999: Sec. 2.
The Scott-Curry undecidability theorem excludes the possibility of an all-general equality function, one that would be meaningful for every possible case (Csörnyei 2007: 140, Th 7.4.1).
A go-theorem
After you have "implemented" a datatype, you can write a corresponding equality function for it. For most practical cases (lists, numbers, case-analysis selections), there is no mysterious "datatype" for which a corresponding equality function would be lacking. This positive answer is provided by Böhm's theorem.
You can write a Church-numeral equality function that takes two Church numerals and answers whether they are equal. You can write another lambda function/combinator that takes two Church booleans and answers whether they are equal. Moreover, you can implement lists in pure lambda calculus/CL (one proposed way is to use catamorphisms), and then define a function that answers equality for lists of booleans, and another for lists of Church numerals. You can implement trees too, and thereafter write a function that answers equality for trees (one for trees of booleans, another for trees of Church numerals).
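For instance, one such specific equality function, sketched in Haskell on top of the Church-numeral encoding above (again my illustration; a pure lambda-calculus version would use Church subtraction and an is-zero test instead of Haskell's (==) on Int):
type CNat a = (a -> a) -> a -> a   -- as in the previous sketch

-- Observe a Church numeral at the carrier Int ...
toInt :: CNat Int -> Int
toInt n = n (+ 1) 0

-- ... and compare two numerals by comparing their observations.
eqCNat :: CNat Int -> CNat Int -> Bool
eqCNat m n = toInt m == toInt n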
You can automate some of this work, but not all of it: some (but not all) equality functions can be derived automatically. If you already have the specific map functions for trees and lists, and equality functions for booleans and numbers, then you can automatically derive equality functions for boolean trees, boolean lists, number lists, and number trees as well.
A no-go theorem
But there is no way to define a single, fully automatic equality function that works for all possible "datatypes". If you "implement" a concrete, given datatype in lambda calculus, you usually have to design its specific equality function for that encoding.
Moreover, there is no way to define a lambda function that would take two lambda terms and answer whether the two terms behave the same way when reduced. Even more, there is no way to define a lambda function that would take the representations (quotations) of two lambda terms and answer whether the two original terms behave the same way when reduced (Csörnyei 2007: 141, Conseq 7.4.3). This no-go answer is provided by the Scott-Curry undecidability theorem (Csörnyei 2007: 140, Th 7.4.1).
In other algorithmic approaches
I think the two answers above are not restricted to lambda calculus and combinatory logic; a similar possibility and a similar restriction apply to other models of computation. For example, there is no recursive function that would take the Gödel numbers of two unary functions and decide whether the encoded functions behave the same extensionally (Monk 1976: 84, Cor 5.18). This is a consequence of Rice's theorem (Monk 1976: 84, Th 5.17). I feel that Rice's theorem sounds formally very similar to the Scott-Curry undecidability theorem, but I have not yet considered the connection.
Comprehensive equality in a very restricted sense
If I wanted to write a combinatory logic interpreter that provides comprehensive equality testing (restricted to halting terms, i.e. terms that have normal forms), then I would implement it as follows:
I'd reduce both combinatory-logic terms under consideration to their normal forms,
and check whether they are identical as terms.
If so, then their unreduced original forms must have been semantically equivalent too.
But this works only under serious restrictions, although the method works well for several practical purposes. We can perform operations on numbers, lists, trees etc. and check whether we get the expected result. My quine (written in pure combinatory logic) uses this restricted concept of equality, and it suffices, despite the fact that the quine requires very sophisticated constructs (term trees implemented in combinatory logic itself).
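A minimal Haskell sketch of this method (mine, not the quine's actual code): reduce both terms with a step limit, then compare the normal forms structurally:
data Term = S | K | I | App Term Term
  deriving (Eq, Show)

-- One leftmost (normal-order) reduction step, if any redex exists.
step :: Term -> Maybe Term
step (App I x)                 = Just x
step (App (App K x) _)         = Just x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
step (App f x) =
  case step f of
    Just f' -> Just (App f' x)
    Nothing -> App f <$> step x
step _ = Nothing

-- Normalize with a fuel bound, so terms without a normal form give Nothing.
normalize :: Int -> Term -> Maybe Term
normalize 0    _ = Nothing
normalize fuel t = maybe (Just t) (normalize (fuel - 1)) (step t)

-- The restricted equality: both terms must reach a normal form within the
-- bound, and the normal forms must coincide syntactically.
eqRestricted :: Int -> Term -> Term -> Maybe Bool
eqRestricted fuel a b = (==) <$> normalize fuel a <*> normalize fuel b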
I do not yet know what the limits of this restricted equality concept are, but I suspect it is very restricted compared to the correct definition of equality. The motivation behind its use is that it is computable at all, unlike the unrestricted concept of equality.
The restriction can also be seen from the fact that this equality concept works only for combinators that have normal forms. As a counterexample: the restricted equality concept cannot check that I Ω = Ω, although we know that the two terms can be converted into each other.
I have yet to consider how the existence of this restricted concept of equality relates to the negative results of the Scott-Curry undecidability theorem and Rice's theorem. Both theorems deal with partial functions, but I do not yet know exactly how this matters.
Extensionality
But the restricted equality concept has further limitations: it cannot deal with extensionality. For example, it does not notice that S K is related to K I in any way, despite the fact that S K behaves the same as K I whenever both are applied to at least two arguments:
This example needs some more explanation. We know that S K and K I are not identical as terms: S K ≢ K I. But if we apply each of them to any two arguments X and Y, we see a relationship:
S K X Y ⊳ K Y (X Y) ⊳ Y
K I X Y ⊳ I Y ⊳ Y
and of course Y ≡ Y, for any Y.
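A tiny Haskell illustration of this agreement (my own sketch; s, k and i are the combinators written as ordinary functions):
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

k :: a -> b -> a
k x _ = x

i :: a -> a
i x = x

-- s k and k i are different definitions, yet applied to two arguments
-- they both return the second one:
main :: IO ()
main = do
  print (s k negate 7)  -- 7
  print (k i negate 7)  -- 7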
Of course, we cannot "try out" this relationship for every possible pair of arguments X and Y, because there can be infinitely many CL-term instances to substitute for these metavariables. But we do not have to get stuck on this problem of infinity. If we augment our object language (combinatory logic) with (free) variables:
K is a term
S is a term
Any (free) variable is a term (this new clause is the modification!)
If both X and Y are terms, then also (X Y) is a term
Terms cannot be obtained in any other way
and define the reduction rules accordingly, in a proper way, then we can state an extensional definition of equality in a "finite" way, without relying on metavariables with infinitely many possible instances.
Thus, if free variables are allowed in combinatory-logic terms (the object language is augmented with its own object variables), then extensionality can be captured to some degree. I have not yet considered this fully. For the example above, we can use the notation
S K =2 K I
(Curry & Feys & Craig 1958: 162, 5C 5), based on the fact that S K x y and K I x y can be proven equal (already without resorting to extensionality). Here, x and y are not metavariables standing for infinitely many possible CL-term instances in equation schemes, but first-class citizens of the object language itself. Thus, this equation is no longer an equation scheme but a single equation.
For the theoretical study, we can take = to be the "union" of the =n relations for all n.
Alternatively, equality can be defined so that its inductive definition also takes extensionality into consideration. We add one further rule of inference dealing with extensionality (Csörnyei 2007: 158):
...
...
If E and F are combinators, x is an (object) variable, and x occurs in neither E nor F, then from E x = F x we can infer E = F
The non-occurrence constraint is important, as the following counterexample shows: K x ≠ I, despite K x x = I x. The "roles" of the two (incidentally identical) occurrences of the variable differ entirely; excluding such coincidences is the motivation for the constraint.
The use of this new rule of inference can be exemplified by showing how the theorem S K = K I can be proven:
S K = K I is regarded to hold because S K x = K I x has already been proven to hold, see below:
S K x = K I x is regarded to hold because S K x y = K I x y has already been proven to hold, see below:
S K x y = K I x y can be proven without resorting to extensionality; we need only the familiar conversion rules.
What are these remaining rules of inference? They are listed here (Csörnyei 2007: 157):
Conversion axiom schemes:
"K E F = E" is deducible (K axiom scheme)
"S F G H = F H (G H)" is deducible (S axiom scheme)
Equality axiom schemes and rules of inference
"E = E" is deducible (Reflexivity axiom scheme)
If "E = F" is deducible, then "F = E" is also deducible (Symmetry rule of inference)
If "E = F" is deducible, and "F = G" is deducible too, then also "E = G" is reducible (Transitivity rule)
If "E = F" is deducible, then "E G = F G" is also deducible (Leibniz rule I)
If "E = F" is deducible, then "G E = G F" is also deducible (Leibniz rule II)
References
Csörnyei, Zoltán (2007): Lambda-kalkulus. A funkcionális programozás alapjai [Lambda calculus: the foundations of functional programming]. Budapest: Typotex. ISBN 978-963-9664-46-3.
Curry, Haskell B. & Feys, Robert & Craig, William (1958). Combinatory Logic. Vol. I. Amsterdam: North-Holland Publishing Company.
Madore, David (2003). The Unlambda Programming Language. Unlambda: Your Functional Programming Language Nightmares Come True.
Monk, J. Donald (1976). Mathematical Logic. Graduate Texts in Mathematics. New York • Heidelberg • Berlin: Springer-Verlag.
Tromp, John (1999). Binary Lambda Calculus and Combinatory Logic. Downloadable in PDF and PostScript from the author's Lambda Calculus and Combinatory Logic Playground.
Appendix
Böhm's theorem
I have not yet explained clearly how Böhm's theorem relates to the fact that, in most practical cases, a suitable equality-testing function can be written for a meaningful datatype (even in minimalistic functional languages like pure lambda calculus or combinatory logic).
Statement
Let E and F be two different, closed terms of lambda calculus,
and let both of them have normal forms.
Then, the theorem claims, there is a suitable way to distinguish them by applying them to a suitable series of arguments. In other words: there exists a natural number n and a series of closed lambda terms G1, G2, G3, ... Gn such that applying E and F to this series of arguments reduces to false and true, respectively:
E G1 G2 G3... Gn ⊳ false
F G1 G2 G3... Gn ⊳ true
where true and false are the two well-known, lamb-tame, easily manageable and distinguishable lambda terms:
true ≡ λ x y . x
false ≡ λ x y . y
Application
How can this theorem be exploited for implementing practical datatypes in pure lambda calculus? An implicit application of it is exemplified by the way linked lists can be defined in combinatory logic (Tromp 1999: Sec. 2).
(==) is part of the type class Eq. A separate implementation is provided by each type that is an instance of Eq. So to find the implementation, you should usually look at where your type is defined.
Smells like homework to me. Elaborate on why you find it tricky.
You might look at how ML and various Lisps attempt to solve the problem.
You might also look at the source code of other languages' interpreters/compilers; some are written with study in mind.
