Transitive closure with quantifiers in Alloy

For the setup, I have a set of theorems, each with a set of values as a premise, a label, and a value it proves.
1. A /\ B => C
2. C => D
2. A => D // note same label as C => D
3. B /\ C => V
4. ...
I also have the following constraints:
Labels and premises are fixed for each theorem. One theorem always belongs to label 1 and always has A /\ B as a premise, the theorems C => ? and A => ? both always belong to label 2, etc. There may be distinct theorems with the same premise belonging to different labels. For example, it's possible that I could have 4. C => A, even though we already have 2. C => D.
All premises are of the form A /\ B /\ ... /\ N. I will never have A /\ (B \/ C) as a premise, nor will I have ~A. But I could have two theorems that share a label, one with A /\ B as a premise and one with A /\ C.
The value each theorem proves is variable, and in fact is the only thing that varies. Each theorem may prove at most one value.
All theorems with the same label must prove the same value. If 2. C => proves nothing, then 2. A => must prove nothing as well. At most one label can prove a given value. This means it makes sense to write this example as 1. C, 2. D, 3. V, ...
A value is "free" if no theorem proves it. V is never free.
A value is "provable" if it is either A) free, or B) proved by a theorem whose premise consists entirely of provable values.
A model is valid if V is provable. In this case it is, since A and B are free, which gets us C, which gets us V. However, 1. A 2. C 3. V is invalid. What I'm trying to do is figure out which additional facts are required to make all possible models valid. For example, that counterexample disappears if we add a fact that says "A proved value can't be its own premise."
Here's an alloy model representing this:
abstract sig Label {
proves: disj lone Value
}
one sig L1, L2, LV extends Label{}
abstract sig Value{}
one sig A, B, C, D, V extends Value {}
sig Theorem {
premise: Value set -> Label
}
fun free: set Value {
Value - Label.proves
}
pred solvable(v: Value) {
v in free or // ???
}
pred Valid {
solvable[V]
}
pred DefaultTheorems {
one disj T1, T2, T3, T4: Theorem | {
#Theorem = 4
T1.premise = (A + B) -> L1
T2.premise = C -> L2
T3.premise = A -> L2
T4.premise = (B + C) -> LV
}
LV.proves = V
}
check { DefaultTheorems => Valid } for 7
The problem I have is in the solvable predicate. I need it to obey the conjunctions and work for an arbitrary depth. One possible solution would be to use the transitive closure. But if I do v in free or v in free.*(Theorem.(premise.proves)), the model becomes too permissive. It would say that if C is free, then A /\ C -> A is provable. This is because Alloy does not permit sets inside sets, so it collapses {A C} -> A into A -> A and C -> A.
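Written as a predicate, this closure-based attempt looks roughly like this (just a transcription of the expression above):
pred solvable(v: Value) {
  -- too permissive: the closure flattens each premise set into separate
  -- Value -> Value edges, so the conjunction of premise values is lost
  v in free or v in free.*(Theorem.(premise.proves))
}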
On the other hand, I could write it as
pred solvable(v: Value) {
v in free or some t: Theorem |
let premise' = (t.premise).(proves.v) |
some premise' and all v': premise' | solvable[v']
}
But this is very slow and also has a maximum recursion depth of 3. Is there a way to get the speed and arbitrary depth of using a closure with the accuracy of using a quantifier? I suppose I could add a trace, where each step successively proves more values, but it seems odd to enforce an ordering on a system that doesn't need it.

After a lot of experimentation, I've decided that the only good way to do this is with a trace. Here's the final spec if anybody's interested:
open util/ordering[Step]
sig Step {}
abstract sig Label {
proves: disj lone Value
}
one sig L1, L2, LV extends Label{}
abstract sig Value {
proven_at: set Step
}
one sig A, B, C, D, V extends Value {}
sig Theorem {
premise: Value set -> Label
}
fun free: set Value {
Value - Label.proves
}
pred solvable(v: Value, s: Step) {
v in proven_at.s or
some t: Theorem |
let premise' = (t.premise).(proves.v) |
some premise' and all v': premise' |
v' in proven_at.s
}
pred Valid {
solvable[V, last]
}
fact Trace {
free = proven_at.first
all s: Step - last |
let s' = s.next |
proven_at.s' = proven_at.s + {v: Value | solvable[v, s]}
}
pred DefaultTheorems {
one disj T1, T2, T3, T4: Theorem | {
#Theorem = 4
T1.premise = (A + B) -> L1
T2.premise = C -> L2
T3.premise = A -> L2
T4.premise = (B + C) -> LV
}
LV.proves = V
}
check { DefaultTheorems => Valid } for 8 but 4 Step
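As a footnote, the kind of extra fact mentioned in the question ("a proved value can't be its own premise") could be sketched against these signatures along the following lines; the exact formulation below is only one possible reading of that sentence:
fact NoSelfPremise {
  -- sketch: a label never proves a value that occurs in the premise
  -- of one of its own theorems
  all t: Theorem, l: Label | no (l.proves & (t.premise).l)
}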

Related

Representing a theorem with multiple hypotheses in Lean (propositional logic)

Real beginner's question here. How do I represent a problem with multiple hypotheses in Lean? For example:
Given
A
A→B
A→C
B→D
C→D
Prove the proposition D.
(Problem taken from The Incredible Proof Machine, Session 2, problem 3. I was actually reading Logic and Proof, Chapter 4, Propositional Logic in Lean, but there are fewer exercises available there.)
Obviously this is completely trivial to prove by applying modus ponens twice; my question is how do I represent the problem in the first place?! Here's my proof:
variables A B C D : Prop
example : (( A )
/\ ( A->B )
/\ ( A->C )
/\ ( B->D )
/\ ( C->D ))
-> D :=
assume h,
have given1: A, from and.left h,
have given2: A -> B, from and.left (and.right h),
have given3: A -> C, from and.left (and.right (and.right h)),
have given4: B -> D, from and.left (and.right (and.right (and.right h))),
have given5: C -> D, from and.right (and.right (and.right (and.right h))),
show D, from given4 (given2 given1)
I think I've made far too much of a meal of packaging up the problem and then unpacking it. Could someone show me a better way of representing this problem, please?
I think it is a lot clearer not to use and in the hypotheses, using -> instead. Here are two equivalent proofs; I prefer the first.
def s2p3 {A B C D : Prop} (ha : A)
(hab : A -> B) (hac : A -> C)
(hbd : B -> D) (hcd : C -> D) : D
:= show D, from (hbd (hab ha))
The second is the same as the first, except it uses example. I believe you have to specify the names of the parameters using assume rather than inside the declaration:
example : A -> (A -> B) -> (A -> C) -> (B -> D) -> (C -> D) -> D :=
assume ha : A,
assume hab : A -> B,
assume hac, -- You can actually just leave the types off the above 2
assume hbd,
assume hcd,
show D, from (hbd (hab ha))
If you want to use the def syntax but the problem is specified using example syntax, you can apply the definition directly:
example : A -> (A -> B) -> (A -> C)
-> (B -> D) -> (C -> D) -> D := s2p3
Also, when using and in your proof, in the unpacking stage you unpack given3 and given5 but never use them in your show step, so you don't need to unpack them, e.g.
example : (( A )
/\ ( A->B )
/\ ( A->C )
/\ ( B->D )
/\ ( C->D ))
-> D :=
assume h,
have given1: A, from and.left h,
have given2: A -> B, from and.left (and.right h),
have given4: B -> D, from and.left (and.right (and.right (and.right h))),
show D, from given4 (given2 given1)

Is this an accurate example of a Haskell Pullback?

I'm still trying to grasp an intuition for pullbacks (from category theory), limits, and universal properties, and I'm not quite catching their usefulness, so maybe you could help shed some light on that as well as verify my trivial example?
The following is intentionally verbose: the pullback should be (p, p1, p2), and (q, q1, q2) is one example of a non-universal object to "test" the pullback against, to see if things commute properly.
-- MY DIAGRAM, A -> B <- C
type A = Int
type C = Bool
type B = (A, C)
f :: A -> B
f x = (x, True)
g :: C -> B
g x = (1, x)
-- PULLBACK, (p, p1, p2)
type PL = Int
type PR = Bool
type P = (PL, PR)
p = (1, True) :: P
p1 :: P -> A
p1 = fst
p2 :: P -> C
p2 = snd
-- (g . p2) p == (f . p1) p
-- TEST CASE
type QL = Int
type QR = Bool
type Q = (QL, QR)
q = (152, False) :: Q
q1 :: Q -> A
q1 = ((+) 1) . fst
q2 :: Q -> C
q2 = ((||) True) . snd
u :: Q -> P
u (_, _) = (1, True)
-- (p2 . u == q2) && (p1 . u == q1)
I was just trying to come up with an example that fit the definition, but it doesn't seem particularly useful. When would I "look for" a pull back, or use one?
I'm not sure Haskell functions are the best context in which to talk about pull-backs.
The pull-back of A -> B and C -> B can be identified with a subset of A x C, and subset relationships are not directly expressible in Haskell's type system. In your specific example the pull-back would be the single element (1, True), because x = 1 and b = True are the only values for which f(x) = g(b).
Some good "practical" examples of pull-backs may be found starting on page 41 of Category Theory for Scientists by David I. Spivak.
Relational joins are the archetypal example of pull-backs which occur in computer science. The query:
SELECT ...
FROM A, B
WHERE A.x = B.y
selects pairs of rows (a, b) where a is a row from table A and b is a row from table B, and where some function of a equals some other function of b. In this case the functions being pulled back are f(a) = a.x and g(b) = b.y.
Another interesting example of a pullback is type unification in type inference. You get type constraints from several places where a variable is used, and you want to find the tightest unifying constraint. I mention this example in my blog.

Why do we need containers?

(As an excuse: the title mimics the title of Why do we need monads?)
There are containers [1] (and indexed ones [2], and hasochistic ones [3]) and descriptions [4]. But containers are problematic [5], and in my admittedly limited experience it's harder to think in terms of containers than in terms of descriptions. The type of non-indexed containers is isomorphic to Σ, which is rather too unspecific. The shapes-and-positions description helps, but in
⟦_⟧ᶜ : ∀ {α β γ} -> Container α β -> Set γ -> Set (α ⊔ β ⊔ γ)
⟦ Sh ◃ Pos ⟧ᶜ A = ∃ λ sh -> Pos sh -> A
Kᶜ : ∀ {α β} -> Set α -> Container α β
Kᶜ A = A ◃ const (Lift ⊥)
we are essentially using Σ rather than shapes and positions.
The type of strictly-positive free monads over containers has a rather straightforward definition, but the type of Freer monads looks simpler to me (and in a sense Freer monads are even better than usual Free monads as described in the paper [6]).
So what can we do with containers in a nicer way than with descriptions or something else?
References
Abbott, Michael, Thorsten Altenkirch, and Neil Ghani. "Containers: Constructing strictly positive types." Theoretical Computer Science 342, no. 1 (2005): 3-27.
Altenkirch, Thorsten, Neil Ghani, Peter Hancock, Conor McBride, and Peter Morris. "Indexed Containers." Journal of Functional Programming 25 (2015): e5. doi:10.1017/S095679681500009X.
McBride, Conor. "hasochistic containers (a first attempt)." June 2015.
Chapman, James, Pierre-Evariste Dagand, Conor McBride, and Peter Morris. "The gentle art of levitation." In ICFP 2010, pp. 3-14. 2010.
Francesco. "W-types: good news and bad news." March 2010.
Kiselyov, Oleg, and Hiromi Ishii. "Freer monads, more extensible effects." In 8th ACM SIGPLAN Symposium on Haskell, Haskell 2015, pp. 94-105. ACM, 2015.
To my mind, the value of containers (as in container theory) is their uniformity. That uniformity gives considerable scope to use container representations as the basis for executable specifications, and perhaps even machine-assisted program derivation.
Containers: a theoretical tool, not a good run-time data representation strategy
I would not recommend fixpoints of (normalized) containers as a good general purpose way to implement recursive data structures. That is, it is helpful to know that a given functor has (up to iso) a presentation as a container, because it tells you that generic container functionality (which is easily implemented, once for all, thanks to the uniformity) can be instantiated to your particular functor, and what behaviour you should expect. But that's not to say that a container implementation will be efficient in any practical way. Indeed, I generally prefer first-order encodings (tags and tuples, rather than functions) of first-order data.
To fix terminology, let us say that the type Cont of containers (on Set, but other categories are available) is given by a constructor <| packing two fields, shapes and positions
S : Set
P : S -> Set
(This is the same signature of data which is used to determine a Sigma type, or a Pi type, or a W type, but that does not mean that containers are the same as any of these things, or that these things are the same as each other.)
The interpretation of such a thing as a functor is given by
[_]C : Cont -> Set -> Set
[ S <| P ]C X = Sg S \ s -> P s -> X -- I'd prefer (s : S) * (P s -> X)
mapC : (C : Cont){X Y : Set} -> (X -> Y) -> [ C ]C X -> [ C ]C Y
mapC (S <| P) f (s , k) = (s , f o k) -- o is composition
And we're already winning. That's map implemented once for all. What's more, the functor laws hold by computation alone. There is no need for recursion on the structure of types to construct the operation, or to prove the laws.
Descriptions are denormalized containers
Nobody is attempting to claim that, operationally, Nat <| Fin gives an efficient implementation of lists, just that by making that identification we learn something useful about the structure of lists.
Let me say something about descriptions. For the benefit of lazy readers, let me reconstruct them.
data Desc : Set1 where
var : Desc
sg pi : (A : Set)(D : A -> Desc) -> Desc
one : Desc -- could be Pi with A = Zero
_*_ : Desc -> Desc -> Desc -- could be Pi with A = Bool
con : Set -> Desc -- constant descriptions as singleton tuples
con A = sg A \ _ -> one
_+_ : Desc -> Desc -> Desc -- disjoint sums by pairing with a tag
S + T = sg Two \ { true -> S ; false -> T }
Values in Desc describe functors whose fixpoints give datatypes. Which functors do they describe?
[_]D : Desc -> Set -> Set
[ var ]D X = X
[ sg A D ]D X = Sg A \ a -> [ D a ]D X
[ pi A D ]D X = (a : A) -> [ D a ]D X
[ one ]D X = One
[ D * D' ]D X = Sg ([ D ]D X) \ _ -> [ D' ]D X
mapD : (D : Desc){X Y : Set} -> (X -> Y) -> [ D ]D X -> [ D ]D Y
mapD var f x = f x
mapD (sg A D) f (a , d) = (a , mapD (D a) f d)
mapD (pi A D) f g = \ a -> mapD (D a) f (g a)
mapD one f <> = <>
mapD (D * D') f (d , d') = (mapD D f d , mapD D' f d')
We inevitably have to work by recursion over descriptions, so it's harder work. The functor laws, too, do not come for free. We get a better representation of the data, operationally, because we don't need to resort to functional encodings when concrete tuples will do. But we have to work harder to write programs.
Note that every container has a description:
sg S \ s -> pi (P s) \ _ -> var
But it's also true that every description has a presentation as an isomorphic container.
ShD : Desc -> Set
ShD D = [ D ]D One
PosD : (D : Desc) -> ShD D -> Set
PosD var <> = One
PosD (sg A D) (a , d) = PosD (D a) d
PosD (pi A D) f = Sg A \ a -> PosD (D a) (f a)
PosD one <> = Zero
PosD (D * D') (d , d') = PosD D d + PosD D' d'
ContD : Desc -> Cont
ContD D = ShD D <| PosD D
That's to say, containers are a normal form for descriptions. It's an exercise to show that [ D ]D X is naturally isomorphic to [ ContD D ]C X. That makes life easier, because to say what to do for descriptions, it's enough in principle to say what to do for their normal forms, containers. The above mapD operation could, in principle, be obtained by fusing the isomorphisms to the uniform definition of mapC.
Differential structure: containers show the way
Similarly, if we have a notion of equality, we can say what one-hole contexts are for containers uniformly
_-[_] : (X : Set) -> X -> Set
X -[ x ] = Sg X \ x' -> (x == x') -> Zero
dC : Cont -> Cont
dC (S <| P) = (Sg S P) <| (\ { (s , p) -> P s -[ p ] })
That is, the shape of a one-hole context in a container is the pair of the shape of the original container and the position of the hole; the positions are the original positions apart from that of the hole. That's the proof-relevant version of "multiply by the index, decrement the index" when differentiating power series.
This uniform treatment gives us the specification from which we can derive the centuries-old program to compute the derivative of a polynomial.
dD : Desc -> Desc
dD var = one
dD (sg A D) = sg A \ a -> dD (D a)
dD (pi A D) = sg A \ a -> (pi (A -[ a ]) \ { (a' , _) -> D a' }) * dD (D a)
dD one = con Zero
dD (D * D') = (dD D * D') + (D * dD D')
How can I check that my derivative operator for descriptions is correct? By checking it against the derivative of containers!
Don't fall into the trap of thinking that just because a presentation of some idea is not operationally helpful, it cannot be conceptually helpful.
On "Freer" monads
One last thing. The Freer trick amounts to rearranging an arbitrary functor in a particular way (switching to Haskell)
data Obfuncscate f x where
(:<) :: forall p. f p -> (p -> x) -> Obfuncscate f x
but this is not an alternative to containers. This is a slight currying of a container presentation. If we had strong existentials and dependent types, we could write
data Obfuncscate f x where
(:<) :: pi (s :: exists p. f p) -> (fst s -> x) -> Obfuncscate f x
so that (exists p. f p) represents shapes (where you can choose your representation of positions, then mark each place with its position), and fst picks out the existential witness from a shape (the position representation you chose). It has the merit of being obviously strictly positive exactly because it's a container presentation.
In Haskell, of course, you have to curry out the existential, which fortunately leaves a dependency only on the type projection. It's the weakness of the existential which justifies the equivalence of Obfuncscate f and f. If you try the same trick in a dependent type theory with strong existentials, the encoding loses its uniqueness because you can project and tell apart different choices of representation for positions. That is, I can represent Just 3 by
Just () :< const 3
or by
Just True :< \ b -> if b then 3 else 5
and in Coq, say, these are provably distinct.
Challenge: characterizing polymorphic functions
Every polymorphic function between container types is given in a particular way. There's that uniformity working to clarify our understanding again.
If you have some
f : {X : Set} -> [ S <| P ]C X -> [ S' <| P' ]C X
it is (extensionally) given by the following data, which make no mention of elements whatsoever:
toS : S -> S'
fromP : (s : S) -> P' (toS s) -> P s
f (s , k) = (toS s , k o fromP s)
That is, the only way to define a polymorphic function between containers is to say how to translate input shapes to output shapes, then say how to fill output positions from input positions.
For your preferred representation of strictly positive functors, give a similarly tight characterisation of the polymorphic functions which eliminates abstraction over the element type. (For descriptions, I would use exactly their reducibility to containers.)
Challenge: capturing "transposability"
Given two functors, f and g, it is easy to say what their composition f o g is: (f o g) x wraps up things in f (g x), giving us "f-structures of g-structures". But can you readily impose the extra condition that all of the g structures stored in the f structure have the same shape?
Let's say that f >< g captures the transposable fragment of f o g, where all the g shapes are the same, so that we can just as well turn the thing into a g-structure of f-structures. E.g., while [] o [] gives ragged lists of lists, [] >< [] gives rectangular matrices; [] >< Maybe gives lists which are either all Nothing or all Just.
Give >< for your preferred representation of strictly positive functors. For containers, it's this easy.
(S <| P) >< (S' <| P') = (S * S') <| \ { (s , s') -> P s * P' s' }
Conclusion
Containers, in their normalized Sigma-then-Pi form, are not intended to be an efficient machine representation of data. But the knowledge that a given functor, implemented however, has a presentation as a container helps us understand its structure and give it useful equipment. Many useful constructions can be given abstractly for containers, once for all, when they must be given case-by-case for other presentations. So, yes, it is a good idea to learn about containers, if only to grasp the rationale behind the more specific constructions you actually implement.

Showing derived relations

Consider a simple graphical structure G that defines a couple of relations (r1 and r2) over a set X of nodes. I want to talk about whether my graphs have a certain property called wf_G. This property is defined by deriving a further relation r3 from r1 and r2, and then constraining r3.
sig X {}
sig G { r1, r2 : X -> X }
pred wf_G [g : G] {
let r3 = (g.r1 - iden) . (g.r2 - iden) . (g.r2 - iden) |
one r3
}
run wf_G for 1 G, 2 X
(I should say: this is very much a toy example.)
The thing is, r3 is not shown in the Visualizer, because it is a let-defined relation. I would like it to be shown in the Visualizer because otherwise I would have to derive it manually in my head. Is there a way to (for instance) annotate the let statement to instruct the Visualizer to include the derived relation, e.g. like the following?
let {show} r3 = (g.r1 - iden) . (g.r2 - iden) . (g.r2 - iden) |
My current workaround is to include r3 in the signature of G, and then constrain r3 according to its definition in terms of r1 and r2. That is, I have been writing:
sig X {}
sig G { r1, r2, r3 : X -> X }
pred wf_G [g : G] {
(g.r3) = (g.r1 - iden) . (g.r2 - iden) . (g.r2 - iden)
&&
one (g.r3)
}
run wf_G for 1 G, 2 X
This is less appealing than my original code because
it conflates the primitive relations r1 and r2 with the derived relation r3, and
it feels less computationally efficient to allow r3 to be initially any relation, and then to constrain it to be a particular relation (though I haven't run timing tests to check whether this is the case).
Edit. Daniel has suggested encoding r3 as a 0-ary function. I don't see how this can be done, but I can see how a 1-ary function would work:
sig X {}
sig G { r1, r2 : X -> X }
fun r3 [g : G] : X -> X {
(g.r1 - iden) . (g.r2 - iden) . (g.r2 - iden)
}
pred wf_G [g : G] {
one r3[g]
}
run wf_G for 1 G, 2 X
If r3 is encoded as a function like this, is it possible to show it in the visualiser? That would certainly solve my problem very satisfactorily.
If you declare a function of no arguments, it will be treated as a skolem constant and will be available in the visualizer as a relation to be displayed under the standard customizations. Might that work for you?
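For example, a parameterless version of r3 could be declared roughly like this (a sketch; it leans on the run command keeping a single G in scope, so the signature G stands in for the parameter g):
sig X {}
sig G { r1, r2 : X -> X }

-- no-argument function: per the answer above, it is treated like a skolem
-- constant and can be displayed in the Visualizer; assumes a single G
fun r3 : X -> X {
  (G.r1 - iden) . (G.r2 - iden) . (G.r2 - iden)
}

pred wf_G {
  one r3
}

run wf_G for 1 G, 2 X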

No 3-element counterexample found

I am learning Alloy and was trying to make it find two binary relations r and s, such that s equals the transitive closure of r, and such that s is not equal to r. I suppose I can ask Alloy to do this by executing the following:
sig V
{
r : V,
s : V
}
assert F { not ( some (s-r) and s=^r ) }
check F
Now Alloy 4.2 cannot find a counterexample, although there is obviously an easy 3-element structure: r = {(V0,V1), (V1,V2)} and s = r + {(V0,V2)}.
Can someone explain what is going on?
Translating your requirement directly:
// find two binary relations r and s such that
// s equals the transitive closure of r and s is not equal to r
run {some r, s: univ -> univ | s != r and s = ^r}
This gives an instance as expected. The mistake in your spec is that your declarations restrict the relations to be functions; you should instead have
sig V {
r: set V,
s: set V
}
or
sig V {r, s: set V}
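With the set-valued declarations, the original assertion does yield a counterexample; here is a sketch of the corrected check (a scope of 3 matches the structure described in the question):
sig V { r, s: set V }

assert F { not ( some (s - r) and s = ^r ) }

-- a scope of 3 admits r = {(V0,V1), (V1,V2)} with s = r + {(V0,V2)}
check F for 3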
